To be done:
- Implement the book exercises in ROS2, which runs on RAE
November 28, 2024
- Let's see if I can build vsfm on Ubuntu 20.04.
- Skipped the old static libs in the makefile. After adding -llapack -ljpeg the tool builds.
- The GUI starts. Adding the images gives:
(VisualSFM:45660): GLib-GObject-WARNING **: 11:39:05.459: ../../../gobject/gsignal.c:2086: type 'GtkRecentManager' is already overridden for signal id '319'
Segmentation fault (core dumped)
- Used a gdb backtrace to see that the program crashed in jpeg_CreateDecompress() (from /lib/x86_64-linux-gnu/libjpeg.so.8), called by BitmapRGBI::OpenJPEG().
- When I try sudo apt-get install --dry-run libjpeg62-dev, 131 (ROS) packages would be removed.
- Tried to build vsfm on nb-ros (Ubuntu 18.04). It also links against libjpeg.so, so it also crashes on loading a jpg.
- Loading a pgm actually worked. So let's see if I can modify the script to export pgms (sketch below).
- pgm files are bitmaps that expect a gray image. That seems to work; the other option would be ppm. Made a list of pgms, which could be loaded (although there were more than 500, so the rest is skipped). No libsiftgpu.so, so no matches.
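- A minimal conversion sketch (my own; the file names and directories are assumptions), turning the exported jpg frames into grayscale pgms that VisualSFM can load without touching libjpeg:
from pathlib import Path
from PIL import Image

src = Path('images')                     # directory with the exported jpg frames
dst = Path('images_pgm')
dst.mkdir(exist_ok=True)
for jpg in sorted(src.glob('*.jpg')):
    img = Image.open(jpg).convert('L')   # 'L' = 8-bit grayscale, as pgm expects
    img.save(dst / (jpg.stem + '.pgm'))  # Pillow writes pgm for grayscale images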
- Let's see if I can build siftgpu on ws9 (with an Nvidia card). Note the 2012 build instructions.
- The option siftgpu_enable_cuda is true by default. Switched it off.
- Missing library IL, so did sudo apt install libdevil-dev.
- Code still fails on SiftMatchCU::CheckCudaDevice(int)
- The compilation includes both ./include/GL/glut.h and /usr/include/GL/gl.h.
- Anyway, that is for the viewer. The command make siftgpu only builds libsiftgpu.so. When it is specified in LD_LIBRARY_PATH, the matches are used in the reconstruction. After SIFT 0249 the program crashes in BasicImage::GetPixelMatrix(), called from BasicImage::ValidateFeatureData().
- The tool is linked against VisualSFM.a, so there is no option to inspect the code.
- Looked into position-independent executables. Adding gcc -static removes the -fPIE warning, but now none of the dynamic libraries can be found (including the dynamic loading of libsiftgpu.so).
November 27, 2024
- The VisualSFM tool is based on a 3DV 2013 paper.
- The tool is configurable with multiple algorithms. As indicated in the documentation, you need SiftGPU for feature detection and matching, PBA for sparse reconstruction, and PMVS/CMVS for dense reconstruction.
-
- Received the rosbag recorded in the maze.
- Following the Recording and playing back data tutorial.
- Did ros2 bag info rosbag2_2024_11_27-13_04_36_Correct/rosbag2_2024_11_27-13_04_36_0.db3, and received:
Files: rosbag2_2024_11_27-13_04_36_Correct/rosbag2_2024_11_27-13_04_36_0.db3
Bag size: 508.8 MiB
Storage id: sqlite3
Duration: 248.446s
Start: Nov 27 2024 13:04:37.390 (1732709077.390)
End: Nov 27 2024 13:08:45.837 (1732709325.837)
Messages: 63709
Topic information: Topic: /diff_controller/odom | Type: nav_msgs/msg/Odometry | Count: 4477 | Serialization Format: cdr
Topic: /imu/data | Type: sensor_msgs/msg/Imu | Count: 13750 | Serialization Format: cdr
Topic: /odometry/filtered | Type: nav_msgs/msg/Odometry | Count: 5534 | Serialization Format: cdr
Topic: /rae/imu/data | Type: sensor_msgs/msg/Imu | Count: 13756 | Serialization Format: cdr
Topic: /rae/imu/mag | Type: sensor_msgs/msg/MagneticField | Count: 13755 | Serialization Format: cdr
Topic: /rae/right/camera_info | Type: sensor_msgs/msg/CameraInfo | Count: 4436 | Serialization Format: cdr
Topic: /rae/right/image_raw/compressed | Type: sensor_msgs/msg/CompressedImage | Count: 4459 | Serialization Format: cdr
Topic: /tf | Type: tf2_msgs/msg/TFMessage | Count: 3541 | Serialization Format: cdr
Topic: /tf_static | Type: tf2_msgs/msg/TFMessage | Count: 1 | Serialization Format: cdr
- Did ros2 bag play rosbag2_2024_11_27-13_04_36_Correct/rosbag2_2024_11_27-13_04_36_0.db3, but do not see any measurements in rviz2. Maybe I should set the domain (not specified in metadata.yaml).
- Now I get the compressed images in rviz2 (I had subscribed to the uncompressed raw images before).
- Looking if this script works.
- This script works with the rosbags-image package. Adjusted export_jpg_images.py to work on the rae topic (sketch below).
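- A hedged sketch of such an exporter (the topic and paths are assumptions, not the actual export_jpg_images.py), using the rosbags and rosbags-image pip packages:
from pathlib import Path
import cv2
from rosbags.highlevel import AnyReader
from rosbags.image import message_to_cvimage

bagpath = Path('rosbag2_2024_11_27-13_04_36_Correct')
outdir = Path('images')
outdir.mkdir(exist_ok=True)
topic = '/rae/right/image_raw/compressed'

with AnyReader([bagpath]) as reader:
    conns = [c for c in reader.connections if c.topic == topic]
    for i, (conn, timestamp, rawdata) in enumerate(reader.messages(connections=conns)):
        msg = reader.deserialize(rawdata, conn.msgtype)
        img = message_to_cvimage(msg, 'bgr8')   # decode CompressedImage to a BGR array
        cv2.imwrite(str(outdir / f'frame_{i:05d}.jpg'), img)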
- The Linux installation instructions of VisualSFM point to this Ubuntu 12.04 installation script, which lists 6 algorithms to be installed.
- Created ~/packages/vsfm in my Ubuntu 20.04 home.
- A pure make fails multiple times with: relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIE.
- Tried sudo apt-get install libgtk2.0-dev libglew1.6-dev libglew1.6 libdevil-dev libboost-all-dev libatlas-cpp-0.6-dev libatlas-dev imagemagick libatlas3gf-base libcminpack-dev libgfortran3 libmetis-edf-dev libparmetis-dev freeglut3-dev libgsl0-dev, which fails on v1.6.
- Instead, did sudo apt-get install libgtk2.0-dev libglew-dev glew-utils libdevil-dev libboost-all-dev libatlas-cpp-0.6-dev libatlas-dev imagemagick libatlas-dev libcminpack-dev libgfortran3 libmetis-edf-dev libparmetis-dev freeglut3-dev libgsl-dev.
- Still the same error; the precompiled libraries seem to be outdated.
- Let's try to use snap (0.5.26-2 from Alberto Mardegan (mardy) installed).
- Running /snap/visualsfm-mardy/current/bin/VisualSFM works, although I get the warning Unable to write to log file
/snap/visualsfm-mardy/3/bin/log/[24_11_27][15_10_14][95].log.
- The command snap/visualsfm-mardy/current/bin/VisualSFM sfm ./images/ my_result.nvm nearly works (the images are png instead of jpg).
- With the script modified to jpg images, the images are loaded, although VisualSFM gives the warning 'unable to load libsiftgpu.so'. Result:
##########-------timing------#########
1 Feature Detection finished, 0 sec used
#############################
9939111 pairs to compute match
NOTE: using 3 matching workers
###########-------timing------#########
9939111 Image Match finished, 10 sec used
#############################
Save to /home/arnoud/onderwijs/VAR/3rdAssignment/rosbag2_2024_11_27-13_04_36_Correct/my_result.nvm ... done
0 pairs have two-view models
0 pairs have fundamental matrices
Save to /home/arnoud/onderwijs/VAR/3rdAssignment/rosbag2_2024_11_27-13_04_36_Correct/my_result.nvm ... done
----------------------------------------------------------------
VisualSFM 3D reconstruction, finished
Totally 12.000 seconds used
- Tried to make SiftGPU, but that fails on src/SiftGPU/CuTexImage.cpp:35:10: fatal error: cuda_gl_interop.h: No such file or directory.
- Looked into /snap/visualsfm-mardy/3/lib/x86_64-linux-gnu/, but indeed no libsiftgpu.so
- Could be that it is part of another snap from Alberto Mardegan: snap install colmap-mardy.
November 26, 2024
- The boot-repair creates and displays a report when you decide not to send it to the forum.
- The diagnosis found nothing to repair.
- The HDD is not visible, neither to fdisk nor to gparted.
- Should look for a harddisk-repair disk from c't magazine.
-
- Looking at the Navigating While Mapping tutorial.
- No background information in the tutorial, just hands-on.
- Reading SLAM Toolbox article (2020).
- The Nav2 layer is explained in Spatio-temporal voxel layer: A view on robot perception for the dynamic world (2020).
- The article indicated that Robot Pose EKF is no longer used; it is replaced by the Robot Localization package.
- As plugin, the article points to the D* Lite planner.
- The Navigation stack's perception pipeline resides in the Costmap 2D package. It includes an obstacle layer and the Voxel Layer (VL), which can place information from 2D or 3D sensors in a dense 3D layer. It relies on a modified Bresenham voxel-clearing algorithm to mark free space (sketch below).
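- My own minimal sketch of such an integer 3D Bresenham-style traversal (not the STVL source): every voxel visited between the sensor voxel and the hit voxel can be marked as free.
def bresenham3d(p0, p1):
    x, y, z = p0
    dx, dy, dz = (abs(b - a) for a, b in zip(p0, p1))
    sx, sy, sz = (1 if b >= a else -1 for a, b in zip(p0, p1))
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:            # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if e1 > 0: y += sy; e1 -= 2 * dx
            if e2 > 0: z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:          # y is the driving axis
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y += sy
            if e1 > 0: x += sx; e1 -= 2 * dy
            if e2 > 0: z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz
            voxels.append((x, y, z))
    else:                                # z is the driving axis
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        for _ in range(dz):
            z += sz
            if e1 > 0: y += sy; e1 -= 2 * dz
            if e2 > 0: x += sx; e2 -= 2 * dz
            e1 += 2 * dy; e2 += 2 * dx
            voxels.append((x, y, z))
    return voxels

print(bresenham3d((0, 0, 0), (6, 2, 3)))  # the voxels to clear along the ray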
- Octomap, RTAB-Map and Cartographer can generate 3D-point clouds, but are not designed to be used for navigation or obstacle avoidance in real-time.
- The new voxel layer (STVL) includes concepts like voxel decay and decay acceleration.
- The decay acceleration makes use of frustum models of the sensors. 3D laser scanners such as the VLP-16 use a slightly modified frustum model.
- The data structure used is OpenVDB from DreamWorks Animation. It resembles B+trees and octrees.
- Most robots nowadays use docker deployments.
- Could look for the ROSCon 2018 presentation. He mentions at minute 10 that publishing the voxel map is quite expensive.
- The code can be found at github. The branch is ros2, although the documentation still mentions ros-kinetic.
- Steve mentions in his video also the nonpersistent_voxel_layer, which stores only the most recent measurement.
-
- D* is covered in Peter Corke's chapter 5 (section 5.4.2).
- The resource section points to Python Robotics, which includes Dijkstra, A*, D*, and Potential Field algorithms.
- Steve LaValle's software page contains RRT code.
- Siegwart covers A* and D* at page 385 (Chapter 6).
- Voxels are not in Siegwart's index.
- Peter Corke covers Voxels in 14.7, including with code to fit planes.
- For viewing, the CloudCompare tool is mentioned (C++ based).
- It seems that Open3D has a PointCloud class wrapper.
- Ran the Chapter 5 notebook.
November 25, 2024
- Looked into the code of EKF which can be found at ~/.local/lib/python3.8/site-packages/roboticstoolbox/mobile/EKF.py.
- This EKF function can be called with several arguments. With only robot as argument it works as dead reckoning; with robot, sensor & map as localisation; without map it starts map creation; and it does SLAM when V is given as well (see the sketch below).
- V is the estimated odometry noise and P0 the initial covariance matrix.
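- The calling conventions, as a hedged sketch (the kwargs follow the RVC3 book examples; the noise values are my own):
import numpy as np
from roboticstoolbox import Bicycle, RangeBearingSensor, LandmarkMap, EKF

V = np.diag([0.02, np.deg2rad(0.5)]) ** 2         # estimated odometry noise
P0 = np.diag([0.05, 0.05, np.deg2rad(0.5)]) ** 2  # initial covariance
W = np.diag([0.1, np.deg2rad(1)]) ** 2            # sensor noise
robot = Bicycle(covar=V)
map = LandmarkMap(20)
sensor = RangeBearingSensor(robot, map, covar=W)

ekf_dr = EKF(robot=(robot, V), P0=P0)                                # dead reckoning
ekf_loc = EKF(robot=(robot, V), P0=P0, sensor=(sensor, W), map=map)  # localisation
ekf_map = EKF(robot=(robot, None), sensor=(sensor, W))               # map creation
ekf_slam = EKF(robot=(robot, V), P0=P0, sensor=(sensor, W))          # SLAM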
- Looked into the code of the robot, which can be found at ~/.local/lib/python3.8/site-packages/roboticstoolbox/mobile/Vehicle.py. The equations (6.6 and 6.7) indicate that theta_d is used, but that is the angle after the movement. So the calculation actually uses theta, the current angle.
- Not sure if this is a bug or a feature. It seems to be a feature, because it is the derivative of equation 6.2. Yet, it is not actually clear which theta is used in equation 6.2. The book is wrong, the comments are misleading, the code is good (as sketched below).
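- A minimal sketch of the discrete update under discussion (my own illustration): the pose update uses the current heading theta, not the post-move theta_d, which is what Vehicle.py implements.
import numpy as np

def step(pose, odo):
    x, y, theta = pose
    d, dtheta = odo                       # distance and heading change this step
    return np.array([x + d * np.cos(theta),
                     y + d * np.sin(theta),
                     theta + dtheta])

print(step((0.0, 0.0, 0.0), (0.1, np.deg2rad(2))))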
- Also looked into the notebook of Chapter 14. In Fig. 14.7 the point P and the epipole are drawn, but not the epipolar line. That line is only drawn at the end of the next section (page 598).
-
- Should do a Boot-repair on Barend's old laptop.
November 22, 2024
- Shaodi mentioned SfM with 7-points this week, Davide works with the 8-point and 5-point algorithms.
- This paper makes a comparison between all three algorithms.
- With the 7-point algorithm, there can be either one or three real solutions: the two-dimensional null space gives F = a*F1 + (1-a)*F2, and enforcing det(F) = 0 yields a cubic in a.
- For more details on the 5-point algorithm, they point to Nister 2004.
-
- Joey pointed me to a discussion on long-term support for RAE, which pointed to the discussion on RVC3 (with RVC4 on the horizon in Q4 2024). According to this discussion a RAE based on RVC4 is only expected in Q4 2025.
- This is a list of demos which could be run directly on the device.
- Could look at this Harris detector based on PyTorch.
-
- Downloaded the Chapter 6 notebook to ~/onderwijs/VAR on the Ubuntu partition of nb-dual. The code from page 211 (6.1.1 Modelling the robot) works, including an animation that was not in the book.
- The next section shows the EKF pose estimation, based on the motion updates (blue: ground truth, red: EKF estimate):
- There is also code to draw the confidence ellipses, comparable with Fig. 6.5.
- The code of EKF can be found at ~/.local/lib/python3.8/site-packages/roboticstoolbox/mobile/EKF.py.
November 21, 2024
- Printing two additional AprilTags. Selected TagSize of 345mm. I could print those in Chrome with scale 105.
-
- Shaodi skipped the 'Correspondence problem continued' part, and instead already introduced SfM and VO.
- Davide's Lecture 8 covered SfM, so I can look at Lecture 9 and 10.
- Lecture 9 mainly zooms in on RANSAC. Could look into 8-points, followed by 5-points (using motion constraints).
November 14, 2024
- By setting the default settings of lb.ps17 to A3, I could print A3.
- I had chosen size 300mm, yet the AprilTag was 22cm. Chose scale 140 in the 2nd try.
- Scaled up to size 345, which should give a TagSize of 276mm.
- Edge still gave the same 22cm. Instead used Chrome, no margins, scale 128. That gave at least a TagSize of 24cm.
-
- Looking into /ws/src/rae-ros/rae_camera/src/camera.cpp. I see several options, including enable_rgb, which uses the 1080 resolution, and enable_depth.
- The script bringup.launch.py doesn't start the camera; that is part of robot.launch.py.
- The script robot.launch.py calls rae_camera.launch.py without arguments.
- The only argument (besides the name) is params_file, which defaults to /ws/install/rae_camera/share/rae_camera/config/camera.yaml.
- This configuration has a double definition for the imu (overwriting the i_gyro_freq from 400 to 110). For the rgbd mode both the left/right camera have i_output_isp set to false, but only for the left camera i_publish_topic is false. For the rgb camera no i_publish_topic is specified.
- Launched my own ros2 launch rae_camera rae_camera.launch.py params_file:='/root/share/rae_camera/config/camera.yaml'.
- The result is that I get all four stereo images, but I don't see the high-resolution rgb:
/rae/imu/data
/rae/imu/mag
/rae/left/camera_info
/rae/left/image_raw
/rae/left/image_raw/compressed
/rae/left/image_raw/compressedDepth
/rae/left/image_raw/theora
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/right_back/camera_info
/rae/right_back/image_raw
/rae/right_back/image_raw/compressed
/rae/right_back/image_raw/compressedDepth
/rae/right_back/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
- Did ros2 param list, which gives a long list, including camera.i_usb_speed. Yet, enable_rgb is not one of them.
- The rae-ros documentation points to depthai-ros for details of the configuration.
- Looking around, I found this random-scripts collection, including a bandwidth, latency and PoE test.
- Had to run get-pip.py before I could do pip install depthai in the docker.
- Now the bandwidth script python3 oak_bandwidth_test.py fails with:
[2024-11-14 11:19:48.293] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
Traceback (most recent call last):
File "//oak_bandwidth_test.py", line 66, in
with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
RuntimeError: No available devices
- Same for python3 poe_test.py: "No POE device found!"
- Note that the LuxonisOS also has no /dev/usb.
- Note that in LuxonisOS python3 get-pip.py also works. After that python3 -m pip install depthai works, although python3 oak_latency_test.py cannot find devices on LuxonisOS.
-
- Tried ros2 launch rae_camera rae_camera.launch.py enable_rgb:=true, but no additional rgb-topic.
- Tried to do ros2 param set rae left.i_publish_topic True on a running node, but no additional topic. Same for ros2 param set rae right_back.i_publish_topic True. The set is reported successful, and ros2 param get rae left.i_publish_topic confirms this. It seems not to be a runtime parameter.
- Looked into camera.cpp, but enable_rgb is already true by default. A successful launch should give "Camera set up, starting publishing", but instead I get:
[component_container-1] [INFO] [1731586328.205822200] [rae]: Finished setting up pipeline.
[component_container-1] [INFO] [1731586328.736970102] [rae]: Camera ready!
- The rae_camera.launch.py doesn't launch its own camera node, but the plugin 'depthai_ros_driver::Camera'.
- When I launch ros2 run rae_camera camera I get the message "Camera set up, starting publishing" and see the topics:
/color/camera_info
/color/image
/color/image/compressed
/color/image/compressedDepth
/color/image/theora
/rae/right_front/camera_info
/rae/right_front/image_raw
/rae/right_front/image_raw/compressed
/rae/right_front/image_raw/compressedDepth
/rae/right_front/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
- Yet, /color/camera_info gives 400x640.
- stereo_pair and stereo_depth both crash on launching a gtk_window (Can't initialize cvInitSystem).
- Same problem on WSL2, for image_view.
-
- Changed to ws9. Now ros2 run image_view image_view --ros-args --remap image:=/rae/right/image_raw/compressed starts (although it shows nothing).
- Couldn't reach 192.168.197.55 nor 146.*.*.42. Reboot of RAE.
- I receive ros2 run image_view image_view --ros-args --remap image:=/color/image, although awfully slow:
- When running ros2 launch rae_camera rae_camera.launch.py params_file:='/root/stereo_camera.yaml', with my own stereo_camera.yaml, I get four different images, although all four in different sizes.
- In addition, I get from rviz2 the warning:
[WARN] [1731591462.219058334] [rviz2]: topic '/rae/right/image_raw/compressed' has more than one types associated, rviz will arbitrarily use the type 'sensor_msgs/msg/CompressedImage' -- all types for the topic: 'sensor_msgs/msg/CompressedImage' 'sensor_msgs/msg/Image'
- Checked with ros2 topic info /rae/right/image_raw/compressed
Type: ['sensor_msgs/msg/CompressedImage', 'sensor_msgs/msg/Image']
- Instead, ros2 topic info /rae/left/image_raw/compressed gives:
Type: sensor_msgs/msg/CompressedImage
- Looking for tools to nicely visualize the stereo-pair. For ROS1 there was stereo_image_proc.
- The repository is ported to ROS humble.
- Trying apt-get install ros-humble-image-pipeline on RAE.
- Inspecting ros2 launch -s stereo_image_proc stereo_image_proc.launch.py. There are more than 30 parameters, including the option whether it should be run in a container.
November 13, 2024
- It would be nice to see how well the stereo cameras of the RAE are actually aligned, and if stereo rectification improves the depth estimates.
- Peter Corke uses multiple sections on multi-view, but in the end only equation 14.6 corresponds with Davide's slides (partly because Davide uses stereo with a fixed baseline).
- Davide goes from stereo to disparity on slide 52, jumping to point clouds on slide 53, back to correspondence on slide 54.
- In section 14.2.4 (page 609) the correspondences are cleaned with RANSAC.
- Peter Corke's section 14.4 shows disparity from dense stereo, including illustrations of stereo failure modes (Fig. 14.33). Nice to add that to the presentation.
- For a deep learning approach on stereo matching Peter points to Luo 2016. The website gives only an abstract. The code can be found on bitbucket.
- Cited 934x. A more recent publication of Raquel is DeepPruner (ICCV 2019). Code can be found on github.
November 11, 2024
- Kris Kitani's Robostats course syllabus was a broken link. The content is more focused on Imitation Learning for Robotics, with no EKF localization.
- Kris also gives Computer Vision on graduate level, including reconstruction.
- His undergraduate CV course (2019) covers Kalman filtering.
- The 2024 version no longer explicitly mentions Kalman, although tracking is covered.
- The 2022 version is closer to our course, with two-view geometry.
- The 2020 version is the last one I can find, taught by Matthew O'Toole.
-
- The code to calibrate in ROS2 is at its core just two lines (lines 81-82):
goodcorners = self.collect_corners(images)
self.cal_fromcorners(goodcorners)
- The function cal_fromcorners actually calls cv2.calibrateCamera() (see the sketch below).
- The rest is wrapper code to select the right chessboard, detect if it is moving, save the calibration, display the result, etc.
- The reproj_err is calculated, but not displayed in ROS. ROS can also use the rational method, if the correct flag is set.
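- A standalone sketch of the same pipeline (my own illustration; an 8x6 inner-corner chessboard with 25 mm squares and a calib/ directory are assumptions), mirroring collect_corners() followed by cal_fromcorners():
import glob
import cv2
import numpy as np

pattern = (8, 6)                          # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

objpoints, imgpoints = [], []
for fname in glob.glob('calib/*.jpg'):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print('reprojection error:', rms)         # the reproj_err the wrapper computes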
November 8, 2024
November 7, 2024
- Tried some of the scripts of yesterday. To run ros2 launch rtabmap_examples depthai.launch.py I first had to do sudo apt-get update, followed by sudo apt-get install ros-humble-rtabmap-examples and a download from github.
- The script starts the camera driver stereo_inertial_node.launch.py from depthai_examples, which publishes the output of a single OAK-D:
/color/camera_info
/color/image
/image/compressed
/image/compressedDepth
/image/theora
/imu
/imu/data
/odom
/odom_info
/odom_info_lite
/odom_last_frame
/odom_local_map
/odom_local_scan_map
/odom_rgbd_image
/parameter_events
/rgbd_image
/rosout
/stereo/converted_depth
/stereo/depth
/stereo/points
/tf
/tf_static
- rtabmap itself subscribes to:
/odom
/rgbd_image
/odom_info
- In addition, there is a node rgbd-sync, which subscribes to:
/right/image_rect
/stereo/depth
/right/camera_info
- I also see imu_filter_madgwick_node, which waits on /imu/data_raw.
- I also see rgbd_odometry, which subscribes to /rgbd_image.
- The stereo_inertial_node died, because there are no available devices (3 connected, but in use). The node started well, reading out the Device Mx ID.
- Also the robot_state_publisher started well (although these are the frames of the OAK-D, not the RAE):
[robot_state_publisher-1] [INFO] [1730968049.282453203] [oak_state_publisher]: got segment oak-d-base-frame
[robot_state_publisher-1] [INFO] [1730968049.282876080] [oak_state_publisher]: got segment oak-d_frame
[robot_state_publisher-1] [INFO] [1730968049.282932580] [oak_state_publisher]: got segment oak_imu_frame
[robot_state_publisher-1] [INFO] [1730968049.282971247] [oak_state_publisher]: got segment oak_left_camera_frame
[robot_state_publisher-1] [INFO] [1730968049.283007789] [oak_state_publisher]: got segment oak_left_camera_optical_frame
[robot_state_publisher-1] [INFO] [1730968049.283044414] [oak_state_publisher]: got segment oak_model_origin
[robot_state_publisher-1] [INFO] [1730968049.283080581] [oak_state_publisher]: got segment oak_rgb_camera_frame
[robot_state_publisher-1] [INFO] [1730968049.283116706] [oak_state_publisher]: got segment oak_rgb_camera_optical_frame
[robot_state_publisher-1] [INFO] [1730968049.283152956] [oak_state_publisher]: got segment oak_right_camera_frame
[robot_state_publisher-1] [INFO] [1730968049.283188748] [oak_state_publisher]: got segment oak_right_camera_optical_frame
- Also see [INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/point_cloud_xyzrgb_node' in container '/container'
-
- Tried to connect a real OAK-D to the top USB. It didn't find the OAK-D, so connected the ZED-mini, but that is not visible with dmesg | tail from the docker. Had to use the web archive again to find the section on USB ports. Yet, I need to disable communication with the side USB to do this.
- Logged in via wifi. Did
gpioset gpiochip0 44=1
echo host > /sys/kernel/debug/usb/34000000.dwc3/mode
- Connected the zed, and got:
[ 5279.910894] hid-generic 0003:2B03:F681.0001: hiddev96,hidraw0: USB HID v1.11 Device [STEREOLABS ZED-M Hid Device] on usb-xhci-hcd.0.auto-1.2/input0
- Running ros2 launch rtabmap_examples zed.launch.py fails on the missing zed_wrapper package. Yet, this would require the ZED SDK and CUDA.
- Instead tried a RealSense D435. Unfortunately, the D435i is still missing. dmesg | tail gave:
usb 2-1: new SuperSpeed Gen 1 USB device number 2 using xhci-hcd
- Running ros2 launch rtabmap_examples realsense_d435i_stereo.launch.py works (partly). As expected, the imu_filter_madgwick_node receives no IMU data. Again, rtabmap_viz-3 fails on a missing symbol. Should run rtabmap with subscribe_odom_info = false. To my surprise subscribe_rgbd = false (rgbd_cameras=1).
- rtabmap subscribes to:
[rtabmap-2] /odom \
[rtabmap-2] /camera/infra1/image_rect_raw \
[rtabmap-2] /camera/infra2/image_rect_raw \
[rtabmap-2] /camera/infra1/camera_info \
[rtabmap-2] /camera/infra2/camera_info \
[rtabmap-2] /odom_info
- stereo_odometry subscribes to the four camera topics above, but also doesn't receive any information.
- Looking for how to activate the top USB, I found this SLAM discussion.
- Looked into the arguments of the camera driver with ros2 launch -s realsense2_camera rs_launch.py. There is a device type, but that defaults to ''.
- First did an apt-get update, which updates several rviz packages.
- Will try ros2 launch realsense2_camera rs_launch.py device_type:=d435 enable_infra1:=true enable_infra2:=true enable_sync:=true
- Yet, this fails with "No Realsense devices were found!" Tried again, after only apt-get install ros-humble-realsense2-camera. Still, No RealSense device found (although dmesg looks OK).
- Installed lsusb with apt-get install usbutils and I see Bus 002 Device 004: ID 8086:0b07 Intel Corp. RealSense D435
- Let's try whether the usb.rules are set. First did apt install v4l-utils. Had to download the 99-realsense-libusb.rules to config (should go to /etc/udev/rules.d). The script at the end does udevadm control --reload-rules && udevadm trigger. Yet, my docker keeps on crashing.
- Tried again. Yet, udevadm control --reload-rules fails with 'Failed to send reload request: No such file or directory'. Started the udev daemon manually with /lib/systemd/systemd-udevd --daemon. Then I could reload the rules. Still no RealSense devices found. Should check with ws9.
-
- Instead, tried ros2 launch rae_bringup rtabmap.launch.py. See nothing happening. Tried ros2 launch rae_bringup slam.launch.py. At least I see three info messages:
[async_slam_toolbox_node-1] [INFO] [1730981239.825267328] [slam_toolbox]: Node using stack size 40000000
[async_slam_toolbox_node-1] [INFO] [1730981239.951552820] [slam_toolbox]: Using solver plugin solver_plugins::CeresSolver
[async_slam_toolbox_node-1] [INFO] [1730981239.952330281] [slam_toolbox]: CeresSolver: Using SCHUR_JACOBI preconditioner.
- I see the following topics published:
/map
/map_metadata
/parameter_events
/pose
/rosout
/scan
/slam_toolbox/feedback
/slam_toolbox/graph_visualization
/slam_toolbox/scan_visualization
/slam_toolbox/update
/tf
/tf_static
- Now, when I start ros2 launch rae_bringup rtabmap.launch.py, I get the updates in the other terminal!
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rectify_color_node' in container 'rae_container'
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/laserscan_kinect_front' in container 'rae_container'
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rtabmap' in container 'rae_container'
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/laserscan_kinect_back' in container 'rae_container'
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/laser_scan_multi_merger' in container 'rae_container'
- I now also see new topics, like:
/cloud_ground
/cloud_map
/cloud_obstacles
/global_path
/global_path_nodes
/global_pose
/goal
/goal_node
/goal_out
/goal_reached
/gps/fix
/grid_prob_map
/landmarks
/local_grid_empty
/local_grid_ground
/local_grid_obstacle
/local_path
/local_path_nodes
/localization_pose
/map
/mapData
/mapGraph
/mapOdomCache
/mapPath
/map_metadata
/merged_cloud
/octomap_binary
/octomap_empty_space
/octomap_full
/octomap_global_frontier_space
/octomap_grid
/octomap_ground
/octomap_obstacles
/octomap_occupied_space
/odometry/filtered
/pose
/rae/scan_back
/rae/scan_front
/scan
/set_pose
- When I do ros2 topic echo /scan I get a response. Unfortunately, rviz2 doesn't work on my WSL2 (error while loading shared libraries: libQt5Core.so.5).
-
- Back to the Ubuntu partition of nb-dual. Did conda activate ros_humble_env.
- Next I tried mamba install ros-humble-realsense2-camera, which fails on dependencies on boost-cpp and gazebo. Mamba claims that they can be installed, although it is not clear how.
- Tried mamba install boost-cpp=1.82.0, which has a dependency on libboost and icu, which I couldn't get solved.
- According to robostack humble, it should be possible for linux-64.
- Did mamba list | grep boost, which gives v1.78.0, instead of 1.82.0.
- Looked with mamba list | grep mutex for ros-distro-mutex, which gives version 0.3.0, instead of 0.5.0. The conflict seems to be icu, which is version 70.1, instead of 72.1 or 73.2. Still 'Could not solve for environment specs'. Tried instead conda install icu=73.2. That takes ages to solve.
-
- Back to native Ubuntu 20.04.
- Did sudo apt-get install ros-noetic-realsense2-camera ros-noetic-realsense2-description, followed by roslaunch realsense2_camera rs_d435_camera_with_model.launch. That works:
November 6, 2024
- The OAK-D 4 is now out, including Luxonis OS and an 8-core ARM CPU (Snapdragon 8).
-
- Looked at the parameter options of rtabmap on RAE with ros2 launch -s rae_bringup rtabmap.launch.py; only two arguments are possible: name (default "rae") and params_file (default camera.yaml).
- Note that the default location is not /ws/src/rae-ros/rae-bringup/config. Yet, one can find slam_param.yaml there, which contains the nav2 settings, including those for amcl and the slam_toolbox.
- The camera.yaml can actually be found in /ws/install/rae_camera/share/rae_camera/config/ together with laserscan_kinect.yaml.
- Tried on the RAE ros2 launch -s rtabmap_launch rtabmap.launch.py, which fails because /opt/ros/humble/share/ contains only rtabmap, rtabmap_conversions, rtabmap_msgs, rtabmap_odom, rtabmap_slam, rtabmap_sync, rtabmap_util and not rtabmap_launch.
- Could do sudo apt-get install ros-humble-rtabmap-launch, which also installs ros-humble-rtabmap-rviz-plugins ros-humble-rtabmap-viz
- Note that every time I get: unable to resolve host rae-7: Name or service not known.
- Now ros2 launch -s rtabmap_launch rtabmap.launch.py works, which gives around 70 parameters.
- Also installed sudo apt-get install ros-humble-rtabmap-examples, which gave the additional packages libdbus-1-dev libegl1-mesa-dev libglfw3 libglfw3-dev libpcap0.8 libpcap0.8-dev ros-humble-librealsense2 ros-humble-realsense2-camera ros-humble-realsense2-camera-msgs ros-humble-velodyne ros-humble-velodyne-driver ros-humble-velodyne-laserscan ros-humble-velodyne-msgs ros-humble-velodyne-pointcloud.
- Yet, depthai.launch.py is not in the official humble release yet (nor zed), so downloaded it from github.
- Now ros2 launch -s rtabmap_examples depthai.launch.py gives around 50 parameters. Note that camera_model should be `OAK-D, OAK-D-LITE`.
- This script launches stereo_inertial_node.launch.py from depthai_examples, instead of the depthai_ros_driver
- On the RAE ros2 launch -s depthai_examples stereo_inertial_node.launch.py also works.
- The directory /underlay_ws/install/depthai_ros_driver/share/depthai_ros_driver/launch/ has several launch files, including camera.launch.py and rtabmap.launch.py. camera.launch.py uses as default OAK-D as camera_model. Looked at ros2 launch -s depthai_ros_driver camera.launch.py.
-
- Looking at SLAM Toolbox. The tutorial on how to use it with NAV2 is still available at the web archive.
- The SLAM Toolbox article highlights that most laser-scanner SLAM algorithms require too many resources to run in real-time, except Cartographer (no longer maintained). SLAM Toolbox replaces Gmapping (ROS1) as the default SLAM algorithm in ROS2.
- Steve Macenski also made a comparison of VSLAM approaches
- Although they considered RTAB-Map, in the end OpenVSLAM was selected as the default VSLAM algorithm (although it has no IMU support). LPSLAM integrates OpenVSLAM and Nav2, together with IMU support.
- Yet, OpenVSLAM is discontinued.
- LPSLAM still has a OpenVSLAM fork. The tutorial of OpenVSLAM is still on web archive.
- To use VSLAM with Nav2 we could better look at this tutorial. It is ZED-based, but should be applicable to any stereo camera. Seems to be only VO, not VSLAM.
- Darie used Open3D before, which has a Dense RGB-D SLAM tutorial.
November 5, 2024
- Wanted to print 'A Set of Depth Sensor Processing ROS Tools for Wheeled Mobile Robot Navigation' (2017, https://www.jamris.org/index.php/JAMRIS/article/view/428) and 'ira_laser_tools: a ROS LaserScan manipulation toolbox' (2014), but the printer software didn't work under Linux.
- The workspaces on the RAE robots were ported to ROS2 by Adam Serafin.
- The last commit on depth_nav_tools was Dec 7, 2023. Checked with git fetch --dry-run --verbose; the RAE ws is [up to date].
- The last commit on ira_laser_tools was Jul 28, 2023. Checked, [up to date]
- The IRA Laser Tools mentioned gmapping: no official ROS2 version.
- The IRA Laser Tools mentioned amcl: no official ROS2 version.
- The IRA Laser Tools mentioned the navigation stack: nav2 is an official ROS2 version, and is amcl-based.
- The original navigation stack was ROS1-based; not sure how closely they are related.
- The original paper is from ICRA 2010.
- Found in Nav2 about: "Navigation in ROS 2 builds on the wild success of the original Navigation Stack (Nav Stack) in ROS (1)."
- If I want to know more (although it is Shaodi's lecture), I could start with reading the concepts.
- Inside Nav2, amcl is used as the localization algorithm, and the SLAM Toolbox is used as the default SLAM algorithm. No other algorithms are mentioned.
- The best overview paper of Nav2 is probably The Marathon 2: A Navigation System (IROS 2020)
- According to the ROSCon 2021 presentation (page 29), Visual SLAM is near on the horizon.
- The discussion on Nav2 can be found on discourse.ros.org
- With ros2 run rtabmap_slam rtabmap --params you can see all params (a lot).
- Looking around on how the CoreWrapper plugin works. This post also uses the CoreWrapper for ROS Humble. Because of the resource usage, Mathieu points to the Increase Speed section.
- For RTAB-map, start here.
- At least found the source-code of the CoreWrapper.
- As expected, the Wrapper indicates that subscribe_rgbd parameter should exist.
- Note that rtab-map can make use of apriltags.
- The CMakeLists.txt indicates that the executable makes use of rtabmap_slam_plugins. They seem to be defined in nodelet_plugins.xml. As targets I see only rtabmap_slam_plugins and rtabmap_node. The plugins should be found at lib/librtabmap_slam_plugins, which is actually /opt/ros/humble/lib/librtabmap_slam_plugins.so. rtabmap_node cannot be found.
- Standalone RTAB-Map has several tutorials. The depth calibration and parameter change look the most interesting, but some links are missing. The links can be found in the menu at the right!
- Looked at the Change parameters tutorial. Interesting is the SLAM mode: if false, localisation on the prebuilt map is performed. Additionally, you can select your Visual Word type.
- If ROS is installed, you can also directly execute the standalone application rtabmap. This page indicates that parameter and topic names should be the same for ROS1 and ROS2 (yet not all nodes are tested yet).
- The ROS2 launch example is based on the zed_wrapper.
- Another example is with the turtlebot3 simulator. All demos are simulated based.
- Yet, in the examples there is also a depthai.launch.py, based on the OAK-D. There is also a vlp16.launch.py.
November 4, 2024
- At nb-dual, I didn't install Peter Corke's code. So did pip install rvc3python.
- The install seems to fail on the roboticstoolbox, where the metadata is 'unknown'.
- Overruled the warning with python3 -m pip install --upgrade --no-cache-dir --use-deprecated=legacy-resolver roboticstoolbox-python.
- The toolbox should not be imported from python3, but via rvctool from the command line. Got some warnings (Peter assumes python3.8, I have python3.10, so my NumPy==1.26.4 is too modern for SciPy==1.8.0; could downgrade to NumPy==1.24 to be sure).
- Did python3 -m pip install numpy==1.24.4. The warning is gone.
- For today, the programming assignment of 13.8.2.3 was given, which is flying a camera through a cube. So this requires that 13.8.2.1 and 13.8.2.2 are programmed first.
- Exercise 13.8.2.1 requires to use spatialmath, so from spatialmath import * should also work. Have to check which package contains the Camera class.
- After from machinevisiontoolbox import * I could do camera = CentralCamera(f=0.015);
- Inside the toolbox, there is a central-perspective camera with a 15mm lens. Maybe update Exercise 1 from f=15 to f=0.015, as on page 544.
- Anyway, section 13.1.2 nicely solves Q1.b (sketch below).
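- A hedged sketch of Q1.b (f=0.015 and the mkcube call are from the book; the rest is my own illustration): project the cube's vertices onto the image plane of the 15mm camera.
from machinevisiontoolbox import CentralCamera, mkcube
from spatialmath import SE3

camera = CentralCamera(f=0.015)      # central-perspective camera with 15 mm lens
P = mkcube(0.2, pose=SE3.Tz(1))      # 0.2 m cube, 1 m in front of the camera
p = camera.project_point(P)          # 2xN array of image-plane points
print(p)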
- Running twistdemo gives Could not connect to the Swift simulator
- Started XLaunch under Windows 10. Still the Swift simulator fails.
- Don't see much documentation for spatialmath. Looking into the package itself: inside base there is graphics.py, which can plot_cube().
- Peter was inspired by this discussion.
- The documentation of spatialmath seems to be here.
- The reference manual for plot_cuboid() can be found here.
- Went back to Graphics from the spatialmath intro, but X.plot() returns nothing (not even a warning, but it returns). Time to switch to native Ubuntu 20.04.
- The toolbox was already installed. Yet, the plot gives nothing, so this seems Jupyter-notebook based.
- The notebook is using python3.6, instead of the standard python3.8.
- Changed the ~/.local/share/jupyter/kernels/python3/kernel.json from /usr/bin/python3.6 to /usr/bin/python3.8, which works, the frame is plotted.
- The function plot_cylinder() is actually from spatialmath.base. cylinder itself is not in this toolbox; let's check spatialgeometry.
- The documentation for Spatial Geometry is maintained by Jesse Haviland. Cylinders are not explicitly mentioned, only obstacle = rtb.Box([1, 1, 1], SE3(1, 0, 0)).
- The documentation of the Robotics Toolbox package itself is here. Actually, in the notebook of Chapter 13 the rtb is not used, only the mvtb and smtb.
- The last toolbox that is part of the rvc_tool is Machine Vision Toolbox.
- The documentation was not explicit about rotating the cylinder with ax. In this discussion, Peter points to base/axes_logic in spatialmath. It seems to be inspired by matlab code.
- I now have the code, although the cylinder splits when I rotate it with the pose (the cube responds well), and the sphere doesn't react to the transform in the pose (it only reacts to the center parameter):
-
- Exercise 13.8.2.2 asks to fly the camera around the cube (not the sphere).
- The support for Animate only works for objects like Line.
- Looking into this wireframe example.
- Found the object to manipulate. Actually, the cylinder has two children, which explains the strange behavior.
- Made the animation (a sequence of images), but I think that my initial approach (looking with the camera at a cube) was the actual intention (see the sketch below):
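- A sketch of that initial approach (my own loop, not Peter's solution): orbit the camera on a 2 m circle around the cube, keeping the optical axis pointed at the origin, and re-project the cube at each step.
import numpy as np
from machinevisiontoolbox import CentralCamera, mkcube
from spatialmath import SE3

camera = CentralCamera(f=0.015)
P = mkcube(0.2)                          # cube at the origin
for th in np.linspace(0, 2 * np.pi, 60):
    # position on the circle, rotated so that the camera z-axis looks at the origin
    camera.pose = SE3(2 * np.sin(th), 0, 2 * np.cos(th)) * SE3.Ry(th + np.pi)
    p = camera.project_point(P)          # one frame of the animation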
-
- One RAE didn't get an ipv4 address, so I manually gave it one with sudo ip addr add 146.*.*.210 dev w0*.
- In the Chapter 13 notebook, Peter creates the cube with cube = mkcube(0.2, pose=SE3.Tz(1));.
November 1, 2024
- Still an error, but I also see no new depthai_ros_driver/src/param_handlers/camera_param_handler.cpp.
- At least the camera_param_handler.hpp has a clear update:
< explicit CameraParamHandler(rclcpp::Node* node, const std::string& name);
---
> explicit CameraParamHandler(std::shared_ptr node, const std::string& name);
- Downloaded a new version of camera_param_handler.cpp, and the declaration is clearly updated:
< CameraParamHandler::CameraParamHandler(std::shared_ptr node, const std::string& name) : BaseParamHandler(node, name) {
---
> CameraParamHandler::CameraParamHandler(rclcpp::Node* node, const std::string& name) : BaseParamHandler(node, name) {
- Also updated the base_parameter_handler.hpp
- Now I get a new error message in the compilation:
/underlay_ws/src/depthai-ros/depthai_ros_driver/include/depthai_ros_driver/param_handlers/base_param_handler.hpp:55:43: error: ‘getSocketName’ is not a member of ‘depthai_ros_driver::dai_nodes::sensor_helpers’
55 | return dai_nodes::sensor_helpers::getSocketName(getROSNode(), socket);
- Next error:
/underlay_ws/src/depthai-ros/depthai_ros_driver/src/param_handlers/imu_param_handler.cpp:11:108: error: no matching function for call to ‘depthai_ros_driver::param_handlers::BaseParamHandler::BaseParamHandler(rclcpp::Node*&, const string&)
11 | rclcpp::Node* node, const std::string& name) : BaseParamHandler(node, name) {}
- Next error was to be expected:
underlay_ws/src/depthai-ros/depthai_ros_driver/src/dai_nodes/sensors/sensor_helpers.cpp:105:37: error: ‘detectSubscription’ was not declared in this scope
105 | if(rclcpp::ok() && (!lazyPub || detectSubscription(imgPub, infoPub))) {
October 31, 2024
- Many of the /data/persistant/home/robothub/containers are Hunter based.
- One of the examples of a hunter-based package is OpenCV, with the frozen Hunter package (5 years old) on github. Here is a sample of how to use it.
-
- Looked into /ws/src/rae-ros/rae_camera/launch/rae_camera.py. It starts a container, but I see only one node, the Camera from depthai_ros_driver. The only addition is the reset_pwm at start. Set the environment variable DEPTHAI_DEBUG=1.
- That gives much too much information:
[component_container-1] [58927016838860C5] [127.0.0.1] [1730381498.617] [StereoDepth(12)] [debug] Stereo descriptor took 19.874 ms
- Started again; see for instance this extra information:
[component_container-1] [INFO] [1730381602.592439815] [rae]: Device type: RAE
[component_container-1] [DEBUG] [1730381602.855832938] [rae]: Socket 1 - OV9782
[component_container-1] [DEBUG] [1730381602.856035229] [rae]: Socket 0 - IMX214
[component_container-1] [DEBUG] [1730381602.856203104] [rae]: Socket 2 - OV9782
[component_container-1] [DEBUG] [1730381602.856246145] [rae]: Socket 3 - OV9782
[component_container-1] [DEBUG] [1730381602.856282770] [rae]: Socket 4 - OV9782
[component_container-1] [DEBUG] [1730381602.857186727] [rae]: Device has no IR drivers
- Completely at start:
[component_container-1] [DEBUG] [1730381597.082923112] [rcl]: Initializing node 'ComponentManager' in namespace ''
...
[component_container-1] [INFO] [1730381597.851199731] [rae_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [DEBUG] [1730381597.851711188] [rcl]: Arg 1 (-r) is not a --param nor a -p flag.
[component_container-1] [DEBUG] [1730381597.851780146] [rcl]: Got remap rule : __node:=rae
[component_container-1]
[component_container-1] [DEBUG] [1730381597.851832063] [rcl]: Initializing node 'camera' in namespace
...
component_container-1] [DEBUG] [1730381597.886006664] [rae]: Setting param camera.i_publish_tf_from_calibration with value 0
...
[component_container-1] [2024-10-31 13:33:17.888] [depthai] [debug] Library information - version: 2.22.0, commit: 3c800a05572ae16de825925370587df1c511aac5 from 2023-08-26 15:42:33 -0400, build: 2023-11-09 16:01:53 +0000
...
[component_container-1] [2024-10-31 13:33:18.163] [depthai] [debug] Resources - Archive 'depthai-bootloader-fwp-0.0.26.tar.xz' open: 9ms, archive read: 261ms
[component_container-1] [2024-10-31 13:33:19.280] [depthai] [debug] Resources - Archive 'depthai-device-fwp-0e0d821769c2f29fc3208fe209463219c3ae3845.tar.xz' open: 9ms, archive read: 1382ms
[component_container-1] [INFO] [1730381602.560901042] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[component_container-1] [DEBUG] [1730381597.083012279] [rcl]: Using domain ID of '42'
...
[component_container-1] [DEBUG] [1730381602.867051916] [pluginlib.ClassLoader]: XML file has no lookup name (i.e. magic name) for class depthai_ros_driver::pipeline_gen::RGB, assuming lookup_name == real class name.
...
component_container-1] [DEBUG] [1730381602.867522790] [pluginlib.ClassLoader]: Exiting determineAvailableClasses()...
[component_container-1] [DEBUG] [1730381602.867556082] [pluginlib.ClassLoader]: Finished constructring ClassLoader, base = depthai_ros_driver::pipeline_gen::BasePipeline, address = 0x7fd10f1248
[component_container-1] [DEBUG] [1730381602.867598665] [pluginlib.ClassLoader]: Attempting to create managed (unique) instance for class depthai_ros_driver::pipeline_gen::Rae.
...
component_container-1] [DEBUG] [1730381602.868775663] [pluginlib.ClassLoader]: [search path for 'depthai_ros_driver']: '/underlay_ws/install/depthai_ros_driver/lib/liblibdepthai_ros_driver.so'
- Launched the camera in one terminal.
- Launched the actuators in another terminal with ros2 launch rae_hw control.launch.py enable_localization:=false.
- At least ros2 run teleop_twist_keyboard teleop_twist_keyboard works. I see no image in rviz2, but there is also something wrong with the frames (only map available). Same when enable_localization:=true. Also no image for image_view.
- Looking with top at the CPU usage. There are two processes using most of the CPU:
- depthai-device- (110%)
- ros2_control_node (45%)
Further I see
- component_container (9%)
- ekf_node (8%)
- robot_state_pub (6%)
- mic_node (1%)
- Removed the Failed to load module "xapp-gtk3-module" error with sudo apt install xapp. Still nothing to see. Although I now receive:
[INFO] [1730384259.784858110] [image_view_node]: Using transport "raw"
- Not that strange: the robot was connected to another wireless network.
- ros2 topic echo /rae/right/camera_info works on the robot.
- The command ros2 node list now gives:
WARNING: Be aware that are nodes in the graph that share an exact name, this can have unintended side effects.
/complementary_filter_gain_node
/controller_manager
/diff_controller
/ekf_filter_node
/image_subscriber
/joint_state_broadcaster
/launch_ros_468
/launch_ros_706
/mic_node
/rae
/rae_container
/rae_container
/robot_state_publisher
/speakers_node
/transform_listener_impl_55aa114b10
- Looked with ros2 lifecycle nodes, but none of the nodes has lifecycle support.
- To be done:
- Give each of the two containers a different name.
- Make sure that the ekf_filter_node, mic_node and speakers_node are not running.
- Finetune the parameter settings of the camera, so that it does not use 110%.
- For the last issue, see this post, which redirects to this readme.
- Note that the depthai-ros has a USB-speed update.
- Looking at the RAE in /underlay_ws/src/depthai-ros/depthai-ros. The CMakeLists.txt indicates that the project is VERSION 2.8.2, while the USB-speed update was version 2.10.3.
- I can update the CMakeLists.txt
- Did git remote show origin, which showed:
Fetch URL: https://github.com/luxonis/depthai-ros.git
Push URL: https://github.com/luxonis/depthai-ros.git
HEAD branch: humble
- So, it looks like I looked at the correct part; no idea why CMakeLists.txt is not updated.
- Yet, there is also a remote branch rae_pipeline_humble being tracked.
- That version is 93 commits ahead, but 26 commits behind humble.
- The latest one is the USB-speed update. Two files are affected: depthai_ros_driver/src/camera.cpp and depthai_ros_driver/src/param_handlers/camera_param_handler.cpp
- Did colcon build --packages-select depthai_ros_driver in /underlay_ws, which worked.
- In the end, got the two changed files with a wget from the raw github; after that it no longer builds:
/underlay_ws/src/depthai-ros/depthai_ros_driver/src/param_handlers/camera_param_handler.cpp:10:1: error: no declaration matches ‘depthai_ros_driver::param_handlers::CameraParamHandler::CameraParamHandler(std::shared_ptr, const string&)’
10 | CameraParamHandler::CameraParamHandler(std::shared_ptr node, const std::string& name) : BaseParamHandler(node, name) {
- Check tomorrow.
October 30, 2024
- Look if Addison can help. This tutorial covers the 2nd assignment, although in ROS2 Galactic. Galactic is the version before Humble.
- Prerequisites are two other tutorials: how to load a model into Gazebo and how to register a TF from an ArUco marker.
- Start with Loading a model in Gazebo, which is Foxy based (version before Galactic).
- Checked, ros-humble-xacro was already installed.
- Liked the colcon_cd alias, although I have to remember to change the working directory in .bashrc when I use another directory.
- The urdf from the two_wheeled_robot has a LIDAR and IMU, but no camera. Should check the difference with the diff_drive from the gz_example. The main difference is that this robot is defined in a URDF, while gazebo uses SDF.
- The URDF uses gazebo materials, but no plugins.
- Built it successfully with colcon build --packages-select two_wheeled_robot.
- The launch successfully launches the rviz window, but I don't see the gazebo window. Rviz starts with a red robot model, because there is no transform between the lidar_link and the base_link. Which is strange, because I get both:
[robot_state_publisher-2] [INFO] [1730278421.312988705] [robot_state_publisher]: got segment imu_link
[robot_state_publisher-2] [INFO] [1730278421.312991653] [robot_state_publisher]: got segment lidar_link
- The problem seems to be due to other gazebo servers still running in the background, publishing their own robot_description. Yet, I still only see rviz, and the robot doesn't respond to ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 1.0}}" -1:
Waiting for at least 1 matching subscription(s)...
- Also no /cmd_vel among the topics.
- Running ros2 run tf2_ros tf2_echo base_link front_caster gave both Addison and me an error message that could be ignored (in my case Invalid frame ID "base_link").
- Yet, with rosgraph I am missing a number of topics and joint_states:
versus Addison's
- First tutorial finished
-
- The 2nd tutorial also has some prerequisites: OpenCV and ROS2, and Pose estimation with an ArUco marker.
- Going directly into Pose estimation with an ArUco marker; at the end I'll try to perform this also with a RAE camera.
- That tutorial has camera calibration as a prerequisite.
- Calibrating was outside ROS2, but the 2nd tutorial assumes that the workspace already has an opencv_tools directory, so I will first do Getting started with OpenCV and ROS.
- Did rosdep install -i --from-path src --rosdistro humble. I actually got two error messages:
opencv_tools: Cannot locate rosdep definition for [opencv2]
two_wheeled_robot: Cannot locate rosdep definition for [rviz]
- Updating the dependency to rviz2 in two_wheeled_robot/package.xml works.
- Changing the dependency to python3-opencv in opencv_tools/package.xml (different from the tutorial, inspired by this Medium tutorial) solves the dependency issue.
- Note that this code is comparable with the one from cpmr_apb.
- Yet, colcon build --packages-select opencv_tools fails on setup.py:
RuntimeError: 'distutils.core.setup()' was never called -- perhaps 'setup.py' is not a Distutils setup script?
- Looked at cpmr_apb/setup.py, which had some additional imports.
- Replaced the opencv_tools/setup.py with the one from cpmr_apb; still the same error. Is it the package.xml? Anyway, running the code directly works: python3 basic_image_publisher.py.
- ws9 doesn't have a camera, so the basic_image_publisher fails. The basic_image_subscriber subscribes to 'video_frames', so made a version that listens to the RAE camera topic instead (sketch below).
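- A minimal sketch of that version (my own; the topic name is an assumption):
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class ImageSubscriber(Node):
    def __init__(self):
        super().__init__('image_subscriber')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(
            Image, '/rae/right/image_raw', self.callback, 10)

    def callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')   # ROS Image -> OpenCV BGR
        cv2.imshow('camera', frame)
        cv2.waitKey(1)

def main():
    rclpy.init()
    rclpy.spin(ImageSubscriber())

if __name__ == '__main__':
    main()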
-
- Started rae_bringup, which seemed to work. But after a while no ros2 topic echo /rae/right/camera_info. At the rae I see the warning:
[component_container-1] [ERROR] [1730295311.876935380] [rae]: No data on logger queue!
- Looked into the disk usage of the docker:
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 22610354 19863588 1804431 92% /
tmpfs 1452128 0 1452128 0% /sys/fs/cgroup
tmpfs 1452128 696 1451432 1% /dev/shm
/dev/mmcblk1p11 22610354 19863588 1804431 92% /etc/hosts
- Actually, the docker has two ws, underlay_ws and ws. In underlay_ws there are depth_nav_tools depthai-ros ira_laser_tools ros-gst-bridge. Inside ws is rae-ros, with rae_bringup, rae_camera, rae_description, rae_hw and rae_msgs.
- A large unnecessary file is /opt/ros/humble/share/robot_localization/test/test3.bag.
- Another suspicious file is /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_jammy_universe_binary-arm64_Packages.lz4.
- Looking for huge files, first with find / -size +100M -ls, followed by find / -size +50M -size -100M -ls.
- An unnecessary file seems to be:
/underlay_ws/src/depthai-ros/.git/objects/pack/pack-634cb5efce702364cd1c1045dbeecf3d61bb9954.pack
- Did git gc, which does some housekeeping. It compressed some objects, so the pack became smaller:
85591 95100 -r--r--r-- 1 root root 97380731 Oct 30 14:07 /underlay_ws/src/depthai-ros/.git/objects/pack/pack-ce0f273240885bb8f86fdd4bdea036295238d095.pack
- Also did git gc --aggressive, which further reduced the pack:
85587 85804 -r--r--r-- 1 root root 87861821 Oct 30 14:10 /underlay_ws/src/depthai-ros/.git/objects/pack/pack-5d2c2709d63b0fba891c50213bec9d5989b073c1.pack
- The large files in this depthai-ros directory are for instance the mobilenet-ssd_openvino_2021.2_6shave.blob and yolov4_tiny_coco_416x416_openvino_2021.4_6shave_bgr.blob in depthai_examples/resources.
- There are also some large stl files in depthai_descriptions/urdf/models/, such as OAK-D-PRO-W.stl
- Installed apt install ros-humble-rosbag2, followed by ros2 bag info /opt/ros/humble/share/robot_localization/test/test3.bag. That gives:
[ERROR] [1730298366.610744582] [rosbag2_storage]: No storage id specified, and no plugin found that could open URI
No plugin detected that could open file /opt/ros/humble/share/robot_localization/test/test3.bag
- To try that, I could try this official ros1-bridge tutorial. This is a known issue.
- There is a directory /ws/log, which contains some small build-logs.
- There is a /var/log directory, with a lot of downloads in .hunter/_Base/Download/Hunter/0.24.18/.
- All log files are in /root/.ros/log/.
- Looked at the info from the battery_status. The only relevant info seems to be the power_supply_status.
- With only use_slam:=false, the ekf_node is still running.
- I am checking the parameters with launch -s, and found .
- Running with run_container:=false gives an interesting warning:
[critical] VPU crash dump found, saved to: /tmp/gate_fw_e9wkd43x/485e53f7b26b4c79c89d13afc817c7c279630434c15fb52103ffa5c792c86074/crashDump.json
- Now at least I get capacity info from the battery.
- There is still a container, because I get:
[component_container-1] [INFO] [1730303171.223476893] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
- Had to do sudo apt install ros-humble-image-view to be able to run ros2 run image_view image_view --ros-args -r image:=/rae/right/image_raw. No window shows up.
- Looking at ros2 launch -s rae_camera rae_camera.launch.py, only two options are listed (name and params_file).
- Yet, again the VPU error. According to this post, it could be overheating. The VPU issue seems to be only solvable with a hard shutdown. Tried a shutdown now; let's see if that solves the VPU issue for the moment.
October 29, 2024
- Back at ws9. Did an install of ros-humble-ros-gz. That also installed ros-humble-ros-gz-image ros-humble-ros-gz-sim-demos ros-humble-sdformat-urdf ros-humble-theora-image-transport.
- Launched ros2 launch cpmr_apb gazebo.launch.py. Still the GazeboRosFactory error.
- Did a apt-cache search humble-ros-gz, but ros-humble-ros-gz-bridge and ros-humble-ros-gz-interfaces were already installed (now also manual).
- The ros-humble-ros-gz-demos are used in this tutorial.
- Starting with the launch tutorial.
- At least ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=empty.sdf works. Only get the warning:
ruby $(which ign) gazebo-1] libEGL warning: egl: failed to create dri2 screen
- The command ros2 launch ros_gz_sim gz_sim.launch.py gz_args:=~/var_ws/install/cpmr_apb/share/cpmr_apb/world_demo.sdf also works, although the Depot without the RAE robot.
- Spawning an URDF is even one step before.
- Just simply gz sim empty.sdf doesn't work.
- The command gz help works, but sim and service are no options. The tutorial is for Gazebo Ionic; I should look for Fortress.
- At least ign gazebo empty.sdf works (same egl warning). About shows that this is indeed Gazebo Fortress (version 6.16.0).
- Also ign service -s /world/empty/create --reqtype ignition.msgs.EntityFactory --reptype ignition.msgs.Boolean --timeout 1000 --req 'sdf_filename: "~/var_ws/install/cpmr_apb/share/cpmr_apb/blockrobot.urdf", name: "blockrobot"' works.
- Started the simulation, but no topics are visible in ROS. I see two warnings:
[Err] [SystemLoader.cc:94] Failed to load system plugin [libgazebo_ros_planar_move.so] : couldn't find shared library.
[Err] [SystemLoader.cc:94] Failed to load system plugin [libgazebo_ros_ray_sensor.so] : couldn't find shared library
- The command ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 1.0}}" -1 gives:
Waiting for at least 1 matching subscription(s)...
- Looks like explicit communication bridges have to be defined (ROS2 Integration tutorial); a minimal bridge launch sketch is given below.
- The ROS2 Interoperability demo shows in the 2nd part a vehicle that is driven around. The key seems to be Setup step 2: 'Configure RViz (and other ROS 2 tools) to control a robot model simulated by a Gazebo world'.
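- A minimal bridge launch sketch (my own, modeled on the ros_gz_bridge README; the topic names are assumptions): forward /cmd_vel into Gazebo and /scan back into ROS 2.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    bridge = Node(
        package='ros_gz_bridge',
        executable='parameter_bridge',
        arguments=[
            # velocity commands ('@' creates a bidirectional bridge)
            '/cmd_vel@geometry_msgs/msg/Twist@gz.msgs.Twist',
            # laser scans from Gazebo back into ROS 2
            '/scan@sensor_msgs/msg/LaserScan@gz.msgs.LaserScan',
        ],
        output='screen',
    )
    return LaunchDescription([bridge])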
- Seems that this demo can be found on github. Tried ros2 launch ros_gz_example_bringup diff_drive.launch.py.
- That package first has to be built.
- Did git clone https://github.com/gazebosim/ros_gz_project_template.git -b fortress
- First export GZ_VERSION=fortress, followed by rosdep install --from-paths src --ignore-src -r -i -y --rosdistro humble which installed ros-humble-joint-state-publisher-gui.
- The four packages were all built without warnings.
- Tried ros2 launch ros_gz_example_bringup diff_drive.launch.py again. Now it works, both a Gazebo and Rviz screen pop up. Both the Rviz and Gazebo log give a number of warnings:
robot_state_publisher-3] [WARN] [1730193423.404660688] [sdformat_urdf]: SDFormat link [lidar_link] has a , but URDF does not support this
[robot_state_publisher-3] [WARN] [1730193423.407752241] [kdl_parser]: The root link chassis has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF.
[robot_state_publisher-3] [WARN] [1730193423.407777451] [kdl_parser]: Converting unknown joint type of joint 'caster_wheel' into a fixed joint
[robot_state_publisher-3] [INFO] [1730193423.407819283] [robot_state_publisher]: Floating joint. Not adding segment from chassis to caster.
[rviz2-4] [WARN] [1730193423.770773850] [sdformat_urdf]: SDFormat link [lidar_link] has a , but URDF does not support this
[parameter_bridge-2] [INFO] [1730193424.407283213] [ros_gz_bridge]: Creating ROS->GZ Bridge: [/diff_drive/cmd_vel (geometry_msgs/msg/Twist) -> /model/diff_drive/cmd_vel (gz.msgs.Twist)] (Lazy 0)
[/model/diff_drive/odometry (gz.msgs.Odometry) -> /diff_drive/odometry (nav_msgs/msg/Odometry)] (Lazy 0)
[parameter_bridge-2] [INFO] [1730193424.419744360] [ros_gz_bridge]: Creating GZ->ROS Bridge: [/scan (gz.msgs.LaserScan) -> /diff_drive/scan (sensor_msgs/msg/LaserScan)] (Lazy 0)
- So, ros2 topic pub /diff_drive/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 1.0}}" -1 works. I had to do ros2 topic pub /diff_drive/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.0}}" -1 to stop the robot.
- The command ros2 run teleop_twist_keyboard teleop_twist_keyboard --ros-args --remap cmd_vel:=/diff_drive/cmd_vel also works.
- I received in the gazebo terminal:
parameter_bridge-2] [INFO] [1730194147.221868950] [ros_gz_bridge]: Passing message from ROS geometry_msgs/msg/Twist to Gazebo gz.msgs.Twist (showing msg only once per type)
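- The same drive-and-stop sequence can also be scripted; a minimal rclpy sketch of my own (not from the demo):
import time
import rclpy
from geometry_msgs.msg import Twist

def main():
    rclpy.init()
    node = rclpy.create_node('drive_and_stop')
    pub = node.create_publisher(Twist, '/diff_drive/cmd_vel', 10)
    msg = Twist()
    msg.linear.x = 1.0          # drive forward for two seconds
    end = time.time() + 2.0
    while time.time() < end:
        pub.publish(msg)
        time.sleep(0.1)
    pub.publish(Twist())        # an all-zero Twist stops the robot
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()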
- Strangely enough, after the drive the /diff_drive/odom Fixed Frame also works, so all rviz displays are OK:
-
- Looked into diff_drive.launch.py. There are actually two sdf files loaded: 'models/diff_drive/model.sdf' and 'worlds/diff_drive.sdf'.
- In 'ros_gz_example_gazebo/worlds/diff_drive.sdf' a merge is done with models/diff_drive. I also see three publishers: JointState, Pose and Odometry
- The model.sdf can be found as ros_gz_example_description/models/diff_drive/model.sdf. This has several joint definitions, including a lidar. It also has some gazebo-plugins. No urdf is loaded.
- Looking into cpmr_apb. Running python3 populate.py gives:
[INFO] [1730199931.430735960] [populate]: Connecting to /spawn_entity service...
- The spawn-service (no longer) exists, so I tried ign service -s /world/demo/create --reqtype ignition.msgs.EntityFactory --reptype ignition.msgs.Boolean --timeout 1000 --req 'sdf_filename: "/home/arnoud/var_ws/install/cpmr_apb/share/cpmr_apb/blockrobot.urdf", name: "blockrobot"'. That brings the blockrobot into my demo world:
- Yet, I see no wheels, no odometry, no cmd_vel.
- Looked into Entity creation.
- There are other Gazebo 6.9 tutorials, but many are not archived.
October 27, 2024
- Testing the RoboStack solution on WSL on my home-computer, to continue my test from October 10.
- Maybe a good robot to start with is the gazebo_diff_drive_demo from testing integration.
- Tried directly after mamba install ros-humble-desktop to do mamba install ros-humble-gazebo-ros-pkgs, which fails (on ros-humble-gazebo-dev, which requires conflicting versions of boost-cpp and pcl).
- First trying mamba install ros-humble-ros-core ros-humble-geometry2 (both packages were already installed)
- Running gazebo --verbose /opt/ros/foxy/share/gazebo_plugins/worlds/gazebo_ros_diff_drive_demo.world fails because gazebo is not installed. The suggestion is to install gazebo9, but the installer suggests gazebo11.
- Should look at this page for the correct combination (ROS Humble can be combined with Gazebo Fortress, Garden and Harmonic).
- Continue with mamba install ros-humble-ros-gz
- That install succeeded, but there is still no gazebo. On this page Humble and Fortress are recommended.
- If I see this page, gazebo should be installed with mamba install ignition-fortress, but that package cannot be found.
- Looking at Robostack available packages
- First tried mamba install ros-humble-gazebo-ros which fails on dartsim (seems solvable) and libboost-cpp.
- Next tried mamba install gazebo11 and mamba install gazebo6; both fail.
- As a last try I did mamba install ros-humble-ros-gz-sim-demos. That worked. So, try to run the gz-sim-demos tomorrow.
October 25, 2024
- Joey nicely documented the instructions to configure the RAE robots. The instructions to disable RAE's own WIFI host are:
sudo systemctl disable wpa_supplicant.service
- And to re-enable:
sudo systemctl daemon-reload && sudo systemctl enable wpa_supplicant_custom.service
-
- Checked Luxonis rae-ros github. The default branch is humble, so the missing dependencies are their bug.
- Again removed the speaker and mic_node from rae_hw. Now all packages are built.
- Next step is to start the rae_gazebo simulation.
- According to this post, this can be done with ros2 launch rae_gazebo gazebo.launch.py.
- When explicitly defining the location of the world_demo.sdf (it was copied to install), I could launch rae_gazebo:
- I could drive around, both with ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 1.0}}" -1 and ros2 run teleop_twist_keyboard teleop_twist_keyboard. (page 5 of my seminar).
-
- Back to the seminar. The package was nicely built, yet it fails with an error:
[spawn_entity.py-4] [ERROR] [1729869855.469442625] [urdf_spawner]: Service %s/spawn_entity unavailable. Was Gazebo started with GazeboRosFactory?
[spawn_entity.py-4] [ERROR] [1729869855.469742650] [urdf_spawner]: Spawn service failed. Exiting.
[ERROR] [spawn_entity.py-4]: process has died [pid 2590621, exit code 1, cmd '/opt/ros/humble/lib/gazebo_ros/spawn_entity.py -topic /robot_description -entity block_robot --ros-args -r __node:=urdf_spawner'].
- Tried to add "-s", "libgazebo_ros_init.so", "-s", "libgazebo_ros_factory.so" as argument in the launch-file, as suggested here.
- Better to look whether I can call it the same way as the rae_gazebo.
- Modified the launch-file to var-gazebo.launch.py and changed the sdf in the urdf. Yet, it fails on [Err] [Server.cc:145] SDF file doesn't contain a world. If you wish to spawn a model, use the ResourceSpawner GUI plugin or the 'world//create' service.
- Now ros2 launch cpmr_apb var-gazebo.launch.py spawns the depot, yet, without robot.
- Went into pure gazebo. Could add a path to cpmr_apb, but the blockrobot doesn't appear. Could add a Turtlebot, but that one publishes no topics (and cannot be moved).
October 24, 2024
- Working with RAE #2, which indicates that it is connected to RobotHub.
- With robothub-ctl startup disable you can switch that off.
- The robot is still connected, as robothub-ctl status shows:
RobotHub Agent (23.223.1855 | keembay/RAE Version 1.12 | DepthAI 2.19.1)
- Even after robothub-ctl stop the screen is the same, so I did a reboot. Now the rae-logo-white is shown on the display.
-
- Have an account again on the coach-mother machine.
- Checking where the installation fails, and why Joey needs robot-stack.
- Switching back to RAE #3, but it seems not to boot (black screen). Maybe a problem with the PICTURE setting (should check later).
- Back to RAE #1.
- Started ros2 launch rae_bringup robot.launch.py enable_slam_toolbox:=false enable_nav:=false. Get the following warning:
[component_container-1] [INFO] [1729775126.158206843] [rae]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [2024-10-24 13:05:26.178] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[component_container-1] [2024-10-24 13:05:27.904] [warning] DeviceGate upload fwp not successful - status: 500, error: {"message": "Internal Server Error"}
[component_container-1]
[component_container-1] [2024-10-24 13:05:27.904] [error] Could not create the session on gate!
- Added the suggested group-rule, but still:
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process started with pid [274]
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process has finished cleanly [pid 274]
[component_container-1] [ERROR] [1729775891.798176769] [rae]: No available devices (3 connected, but in use)
[component_container-1] [INFO] [1729775891.798389186] [rae]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [2024-10-24 13:18:14.534] [warning] DeviceGate start fwp not successful - status: 401, error: {"error":"Entry point does not exist, make sure FWP was extracted first"}
[component_container-1]
[component_container-1] [2024-10-24 13:18:14.535] [error] Could not start the session on gate!
[component_container-1] [ERROR] [1729775899.539168727] [rae]: Failed to connect to device, error message: X_LINK_DEVICE_NOT_FOUND
- Also tried it without the two enables on false, still same error.
- On May 27 I had a similar problem, which was solved with a gpioset. Maybe I should do this outside the docker. The gpioset is needed when using the top USB-port, so it is not needed normally. Setting gpioset gpiochip0 44=1 freezes the RAE again.
- After the double reboot, the process started nicely. The LED is purple, the last message:
[component_container-1] [INFO] [1729777150.533788411] [battery_node]: Power supply status changed to [Discharging] after 0 h 0 min 5 secs.
- Launched the foxglove-bridge. In the App I have to login. Not sure if foxglove-studio is installed.
- Did sudo apt install ros-humble-foxglove-bridge
- Was able to drive around (on ws9)
-
- Looking on ws9 at my problems with gazebo on WSL (see October 10).
- Just starting gazebo gives no problems.
- Made a ros_humble_ws, did a git clone https://github.com/luxonis/rae-ros.git in ros_humble_ws/src.
- Yet, this time (following Workspace Tutorial I first did rosdep update, followed by rosdep install -i --from-path src --rosdistro humble -y.
- The last command gave two errors:
rae_camera: Cannot locate rosdep definition for [laserscan_kinect]
rae_hw: Cannot locate rosdep definition for [audio_msgs]
- Changed the dependency to audio_common_msgs in ~/ros_humble_ws/src/rae-ros/rae_hw/package.xml. Still this dependency is not found.
- Put a COLCON_IGNORE in rae-ros/rae-camera, because that has a strong dependence on kinect
- Now the rosdep command works.
October 23, 2024
- Using RAE #3 to test the iotroam connection. Yet, on nb-ros the USB-C connection didn't work, nor in WSL, nor in PowerShell. Should update the manual to start with the USB-C connection, because each RAE will have a different ip.
- RAE #3 has only two scripts: the start_docker.sh (which suggests starting robot_launch with the default navigation on) and another that adds /opt/intel/codec-unify/lib to LD_LIBRARY_PATH.
- Looking instead at RAE #7BF2881.
- That RAE has many more scripts. The start_docker.sh is more complete, including removing dockers with the same name. Yet, still with the default navigation.
- The script get_info.sh, gives both the MAC and the RAE-ID. The wifi-ip is not given, but is *.*.*.36.
- Run the set_wifi.sh with iotroam as SSID, which gives several warnings like:
Successfully initialized wpa_supplicant
nl80211: kernel reports: Match already configured
...
nl80211: kernel reports: Match already configured
Wi-Fi configuration updated and services restarted successfully.
- Still same IP *.*.*.36.
- Looked in /settings/rae/etc/lcd-show.conf, which gives PICTURE=/usr/share/rae-logo-white.jpg.
- That directory also gives the hostname, which is just rae. The directory also has a wifi.conf, which is SSID_BASE=rae.
- Inside /settings/rae/etc/system/basic.target.wants two services are started, a charger-rae service and an event-watch service.
- The charger-rae service limits the charging to 1.5A. The event-watch seems to monitor the bottom-button, at least it can activate three /usr/bin/key-* scripts, that do for instance factory-reset and shutdown.
- Looked in the /run/log/journal/*/*.journal. The led-gpios node is actually /soc/spi@201c4000/lcd@0[0], but most interesting is starting the lcd-show.service with the /usr/bin/lcd-show.sh script.
- That script does source /etc/lcd-show.conf, followed by gst-launch-1.0 filesrc location=$PICTURE ! jpegdec ! videoconvert ! fbdevsink. The lcd-show.conf is no link to /settings/rae/etc/lcd-show.conf, but contains the same definition of PICTURE.
- I looked into lcd-show.sh, and I could add ! textoverlay text="Room A" valignment=top halignment=left ! before the videosink, but that has no effect (it is a run-once script, not a service). (Use :syntax off to remove the vim colors.)
- Editing the script or the jpg-images in /usr/share is not possible (not writeable), but I could try to modify /etc/lcd-show.conf to display an image in /home/root. Unfortunately, no convert is installed, so I cannot use this trick.
- Try tomorrow whether I can use python3 and opencv to do the trick (working on RAE #3).
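- A minimal sketch of that opencv trick (the paths are assumptions, not verified; PICTURE in /etc/lcd-show.conf would then point to the new jpg):
import cv2

# Read the stock logo, overlay a text label, and write a new jpg to
# /home/root; then set PICTURE=/home/root/lcd.jpg in /etc/lcd-show.conf
# and restart lcd-show.service.
img = cv2.imread('/usr/share/rae-logo-white.jpg')
cv2.putText(img, 'Room A', (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
            1.0, (255, 255, 255), 2)
cv2.imwrite('/home/root/lcd.jpg', img)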
October 22, 2024
- Cyrill pointed to this nice collection of notebooks of Kalman Filters.
- Note that on slide 38 of the VisualLocalisation lecture Davide points back to DLT (which is probably skipped by Shaodi).
October 21, 2024
- The 4 solutions of two-view geometry can be found at slide 19 of the 3rd tutorial slides.
-
- Looking at Peter Corke's Robotic Vision lectures at Robot Academy.
- Actually, there is a lesson on Finding Line Features, although there is no code with this lesson.
- Nice is Peter's explanation of why a polar representation is needed: with Cartesian coordinates the gradient m of v = m·u + c can become infinite for vertical lines.
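- As illustration, a short opencv sketch of line detection with the polar form rho = u·cos(theta) + v·sin(theta) ('lines.png' is a placeholder):
import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)
for rho, theta in (lines[:, 0] if lines is not None else []):
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    # Draw a long segment through (x0, y0) perpendicular to the normal (a, b).
    p1 = (int(x0 - 1000 * b), int(y0 + 1000 * a))
    p2 = (int(x0 + 1000 * b), int(y0 - 1000 * a))
    cv2.line(img, p1, p2, 255, 1)
cv2.imwrite('lines_detected.png', img)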
-
- The RAE stereo-pair has a baseline of 7.5cm (two OV9782 cameras), so it has specs comparable with this OAK-camera:
Ideal range: 40cm - 6m
below 3.5m: below 2% absolute depth error
3.5m - 6.5m: below 4% absolute depth error
6.5m - 9m: below 6% absolute depth error
MaxZ: ~10 meters with a variance of 10%
October 20, 2024
- Should update the Canvas Course Overview, because Line Detection and Visual Homing are still mentioned for Week 2, while I only covered Camera Calibration in that week.
- Most of Davide's 4th lecture should be known, although it ends with Canny Edge. As modern CNN-techniques Davide mentions HED (ICCV 2015) and EDTER (Transformer based - CVPR 2022).
- Recent code that outperforms EDTER is MUGE (CVPR 2024).
- But edges are not lines (nor is the Hough transform discussed).
-
- Lecture 5 concentrates on keypoints, because those are the features used in motion estimation. Yet, also here most is covered in CV1. Slides 16 to 23 of Lecture 6 give a nice example of Automatic Scale selection. Lecture 6 works towards the SIFT Descriptor. Nice is also the explanation of the magic number 0.8 of SIFT feature matching. At the end of Lecture 6 the FAST corner detector and the SURF, BRIEF and ORB blob detectors are also introduced. It ends with LIFT and SuperPoint.
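- As illustration, a short opencv sketch of that 0.8 ratio test (the two images are placeholders): keep a match only when the best distance is clearly smaller than the second-best one.
import cv2

img1 = cv2.imread('img1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('img2.png', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.8 * n.distance]   # Lowe's ratio test
print(f'{len(good)} matches survive the ratio test')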
- Harris and SIFT were discussed in Lecture 5 of CV1, the other four not. Those algorithms came back in the Bag of Patches of Lecture 8.
- My Lecture 5 did a lot with lines, should look if I can find something back in Corke or Dudek.
- The last CV1 lecture covers Stereo, but Camera Calibration is explained in only two slides, the same for Epipolar geometry.
- Davide's Lecture 6 covers Szeliski 7.1 and Corke's section 13.3 (so not Corke's section 13.8 or Chapter 14 or 15).
- Davide's Lecture 7 covers Stereo Vision, and going from a Disparity map to a Point Cloud. It should cover Szeliski chapter 12 and Corke's section 14.3 (not Corke's section 14.9).
- Davide's Lecture 8 covers Structure from Motion and Epipolar Geometry, including the 8-point algorithm. This covers Szeliski's section 11.3 and Corke's section 14.2 (still no match)
- Davide's Lecture 9 has a nice breaking point at slide 44, where it goes from the general case to robotics. With less than 5 points, you can use motion constraints! Ends with SuperGlue! Sections covered (Szeliski's 8.1.4, 8.3.1, 11.3, Corke's 14.2 - still no match).
- Lecture 10 has a nice overview on slide 40, comparing dense and semi-dense (edge based) to sparse methods.
- For further reading Davide points back to his IEEE RAM Visual Odometry Tutorials (part I and part II), although they are from 2011 / 2012. No textbook mention.
- Lecture 11 is the good-old Kanade-Lucas-Tomasi (KLT) tracker. No textbook mention. It would be nice to implement KLT on lines instead of corners.
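- A minimal KLT sketch with opencv (placeholder frames), tracking Shi-Tomasi corners with pyramidal Lucas-Kanade:
import cv2

prev = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE)
curr = cv2.imread('b.png', cv2.IMREAD_GRAYSCALE)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)
tracked = p1[status.ravel() == 1]   # keep only successfully tracked points
print(f'tracked {len(tracked)} of {len(p0)} corners')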
- It would be nice to run this Vanishing Point Estimator code on the first assignment. The code contains a hyperparameter tuning algorithm (will take hours), which mentions a report (not included).
- This ICRA 2021 paper describes such a tracker, yet it seems to be a paper without code. Found a 2018 C++ implementation.
- Here is a ROS1, Ubuntu 18.04, python2 implementation (event based!).
- Found this stereo vision odometry paper from ESA (2016), which is based on this C++ Library.
- Lecture 12a is about place recognition, which mentions Image Retrieval and Visual Words (covered in CV1), although it doesn't zoom in on the topological map (could reuse Ben Krose's article).
- Lecture 12b is about Disparity Space Images. Remode is a paper with code, although C++ and ROS Indigo based (Ubuntu 14.04 - including CUDA code). Textbook reference: Szeliski 12.7 (no match).
- Lecture 13 concentrates on integration with IMU estimates. The first case study (OKVIS - IJRR 2015) is again C++ based (OpenCV and ROS indigo).
- Second case study is SVO+GTSAM (ROS Noetic with Ubuntu 20.04 based: 12% python / 83% C++). How to run is described here.
- The third case study is ROVIO, based on an Extended Kalman Filter. The code is ROS1 and C++ based.
- The fourth case is MSCKF, which has several docker files for several ROS versions (including ROS2 - Ubuntu 20.04).
- Yet, this would require camera-imu calibration. This tool is ROS1 based, with Docker support for ROS2.
-
- The 2nd lecture of CV2 already covered camera calibration. At the end it briefly mentions Harris, SIFT and SuperPoint keypoints. No mention of textbook sections.
- Lecture 3 from CV2 describes three SfM paradigms: incremental, global and hierarchical. No explicit mention of the sections from Szeliski's book (just the book). Points to the CVPR 2017 tutorial.
- Look at the 1st tutorial slides. On slide 26 more efficient versions of RANSAC are listed. On slide 33 they use GIST as global features, in comparison with local SIFT features. Slide 62 highlights the two ways of Image Correspondence: matching and tracking. On slide 72 the vocabulary tree is mentioned, which is a nice application of a decision tree.
- In the 2nd tutorial slides Marc Pollefeys introduces on slide 27 the Three points perspective pose problem. On slide 46 he recommends minimizing the geometric error, instead of the algebraic error.
- Martin mainly used the slides from the 3rd tutorial slides.
- Look at Dudek's Lecture 9, but that is occupancy-map based. Interesting that it points to ROS Navigation for Localization, but not in enough detail to be useful.
October 11, 2024
- Dudek covers Localisation and Mapping in Chapters 9 and 10, but these are precisely the chapters without code.
- There is no code for this chapter in the ROS1 section.
- The second seminar could be based on chapter 2, doing all topics on a point robot. Chapter 2 is a chapter with code.
- Chapter 5 has example code for the Aruco markers, which can also detect APRILTAG_36H11. Maybe I should also print some aruco.DICT_7X7_250 tags. The example code also does aruco.estimatePoseSingleMarkers() after detecting the tags.
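- A sketch of that detect-then-estimate flow with the classic cv2.aruco API (OpenCV <= 4.6 contrib; 4.7+ renamed parts of it). The image and the intrinsics K/dist are placeholders; on a real robot they should come from the camera_info topic:
import cv2
import numpy as np

img = cv2.imread('frame.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_7X7_250)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)
    # 0.25 m marker side, matching 250 mm printed tags.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.25, K, dist)
    print(ids.ravel(), tvecs.squeeze())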
- The code with Chapter 6 has a drive_by_line.py. Yet, that is keyboard control, used to train the Reinforcement Learning.
- The OpenCV ArUco tutorial can be found here
- Addison also has a tutorial.
- Addison points to this online ArUco generator. Created 7x7 tags, with a marker size of 250mm.
- Also generated four MIP_36h12 tags (although this one was not in the dictionary of Dudek's aruco_target.py).
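- The 7x7 tags can also be rendered locally instead of via the online generator; a hedged sketch (cv2.aruco.generateImageMarker is the OpenCV >= 4.7 name, older contrib versions call it cv2.aruco.drawMarker):
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_7X7_250)
for marker_id in range(4):
    tag = cv2.aruco.generateImageMarker(dictionary, marker_id, 600)  # 600 px side
    cv2.imwrite(f'aruco_7x7_{marker_id}.png', tag)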
-
- Did sudo apt install libgpiod-dev on my WSL, which installed v1.6.3
- That installed /usr/include/gpiod.hpp (and gpiod.h)
- That is also the place where it is installed in my Ubuntu 20.04 partition.
- Removed several of the test_nodes in rae_hw. Also added in build/rae_hw/CMakeFiles/rae_hw.dir/flags.make an additional INCLUDE_DIR, which contained the two gpiod.h* (copied from the local /usr/include).
- The remaining error was in peripherals/lcd.cpp (close() not known), which I solved by adding #include <unistd.h>.
- Tried ros2 launch rae_gazebo gazebo.launch.py, which fails on missing package robot_localization. Did mamba install ros-humble-robot-localization.
- After that the simulation starts. Note that this is including an ekf_node (from robot_localization)!
- Detailed documentation can be found here (ROS1 noetic based)
- This is the paper: A Generalized Extended Kalman Filter Implementation for the Robot Operating System, IAS-13, 2015 (revised ROS version).
- Starting gazebo for the first time takes a while (ros_gz_sim: Requesting list of world names). Seems like the same problem as here; it could be the deprecation of fuel.ignitionrobotics.com in favour of fuel.gazebosim.com.
- Note that I also have two error messages:
[spawner-5] [ERROR] [1728655369.459672388] [spawner_diff_controller]: Controller manager not available
[spawner-6] [ERROR] [1728655369.462621569] [spawner_joint_state_broadcaster]: Controller manager not available
[ERROR] [spawner-5]: process has died [pid 64879, exit code 1, cmd '/home/arnoud/miniforge3/envs/ros_humble_env/lib/controller_manager/spawner diff_controller -c /controller_manager --ros-args -r __ns:=/'].
[ERROR] [spawner-6]: process has died [pid 64881, exit code 1, cmd '/home/arnoud/miniforge3/envs/ros_humble_env/lib/controller_manager/spawner joint_state_broadcaster --controller-manager /controller_manager --ros-args -r __ns:=/'].
- More serious error:
[ruby $(which ign) gazebo-1] Library error: Problem running [gazebo]() from /home/arnoud/miniforge3/envs/ros_humble_env/lib/libignition-gazebo6-ign.so.6.12.0.
[INFO] [ruby $(which ign) gazebo-1]: process has finished cleanly [pid 64869]
- Adding the general /usr/lib as hint was maybe not a good idea. Try to add the library in a separate lib and add that in the build.dir.
- My modifications in rae_hw.dir/flags.make and build.make were gone. Modified the CMakeLists.txt so that now the correct include directory is found. The library still fails, but with a (temporary) modification of build.make the CMake warning is gone:
CMake Warning at CMakeLists.txt:78 (add_executable):
Cannot generate a safe runtime search path for target lcd_node because
files in some directories may conflict with libraries in implicit
directories:
runtime library [libyaml-0.so.2] in /home/arnoud/miniforge3/envs/ros_humble_env/lib may be hidden by files in:
/usr/lib/x86_64-linux-gnu
- Still the Library error.
- Looking if I can run python3 aruco_target.py. Node starts without problems, with a subscription on mycamera/image_raw. Started also python3 opencv_camera.py to publish mycamera/image_raw.
- Always, [aruco_target]: No targets found! (tested both AprilTag 36h11-89 and ArUco 7x7-7)
- Time to go on holidays.
October 10, 2024
- I see in the Software infrastructure document from Gregory Dudek the remark that the coke_can model has to be installed.
- In Figure 4 the coke cans are present, I have something equivalent with Figure 1.
- Those 40 cans are added by the command ros2 run cpmr_apb populate.py.
- The model itself can be found at gazebo models.
- The model is also available as Fuel Model.
- Fuel has a tutorial on how to insert a fuel model into Gazebo.
-
- Looking for good perception tutorials for Practical 0.
- Addison Sears-Collins has nice tutorials, but most for ROS1 and manipulation.
- Found pick and place, which uses point-clouds and MoveIt.
-
- Testing the seminar. Starting with mamba activate ros_humble_env, followed by source ~/var_ws/install/setup.bash .
- Starting ros2 launch cpmr_apb gazebo.launch.py goes well.
- The command ros2 run teleop_twist_keyboard teleop_twist_keyboard.py fails on No executable found. Looked into ~/miniforge3/envs/ros_humble_env/lib/python3.10/site-packages/teleop_twist_keyboard.py. Running it directly as python3.10 ./teleop_twist_keyboard.py goes well, so it seems that python is the executable that is not found. Looked at the documentation: the command is actually ros2 run teleop_twist_keyboard teleop_twist_keyboard. Yet, no movement of the robot, nor do I see the /odom topic.
- Also the /scan is published, but I don't see updates with ros2 topic echo /scan.
- I saw earlier today a simple robot with wheels, maybe use that one.
- Looking into Addison's ROS2 localisation tutorial.
- It starts with create a mobile robot, which is maybe not needed because we have already a rae_gazebo.
- There is no mamba install ros-humble-rae-ros, so trying to install from source in ~/var_ws/src.
- I tried the same on September 17, 2024, although for the rae_description.
- Did source ~/ros_humble_ws/install/setup.bash, followed by ros2 launch rae_gazebo gazebo.launch.py, which fails on 'ros_gz_sim' not found.
- Did mamba install ros-humble-ros-gz, which brings me slightly further (rae_hw not found). That was on Sep. 17 a COLCON_IGNORE.
- Tried mamba install libgpiod-dev, but doesn't exist. Tried sudo apt install libgpiod-dev, but version (1.4.1-4) was already installed.
- The CMakeList.txt does a find_library(GPIOD_LIBRARY NAMES libgpiodcxx.so), which should find /usr/lib/x86_64-linux-gnu/libgpiodcxx.so. Used the trick from stackoverflow, and added HINTS.
- Next failure is missing audio_msgs.
- A mamba install failed, so did git clone --branch ros2 https://github.com/ros-drivers/audio_common.git, followed by colcon build --packages-select audio_common_msgs and source install/setup.bash.
- Put a COLCON_IGNORE in the other audio_common packages. Yet, I still get the error that the passed package name 'audio_msgs' was not found. That is correct, because I built 'audio_common_msgs'. Changed the dependency in the makefile, still fails. Removed the mic_node and speaker_node for the moment.
- Next missing is ros2_control_test_assets, which is part of ros2-control (17 packages)
- The build now fails in rae_hw/rae_motors.hpp that it cannot include 'gpiod.hpp'
October 9, 2024
- Found a method to get rid of the phantom spaces in my listings (only make the columns flexible; changing the basicstyle removes the underscores).
- Tested the unzipping and launching of the blockrobot on the Ubuntu partition of nb-dual, after mamba activate ros_humble_env.
- Had to do mamba install ros-humble-gazebo-ros, but after that ros2 launch cpmr_apb gazebo.launch.py worked:
-
- Reading the IROS 2019 AprilTag paper.
- The corresponding png-pictures are actually quite tiny.
- Joey uses April Tags generator
- This is a Python tutorial for april-tag.
October 8, 2024
- Following Gregory Dudek's ROS2 introduction on nb-dual.
- The colcon build of cpmr_apb goes well.
- gazebo was not (yet) installed on WSL, so did sudo apt install ros-humble-gazebo-ros.
- Started VcXsrv.
- Launching gazebo in WSL failed on missing soundcard.
October 4, 2024
- The Direct Linear Transform used the SVD() function from Matlab. In Octave the equivalent is svd().
- Numpy has an equivalent function np.linalg.svd(), although the option full_matrices is true by default. Looked at the last row of the returned Vh matrix (numpy returns V transposed, so its last row corresponds to Matlab's last column of V), which is a 12x12 matrix. That last row is equal to the Matlab answer suggested by Davide in his 3rd lecture. Script is saved in ~/onderwijs/VAR, in my WSL partition on nb-dual.
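- For reference, a minimal numpy sketch of that DLT step (the correspondences themselves are placeholders; the real script is in ~/onderwijs/VAR). With 6 points the design matrix Q is 12x12, matching the matrix size mentioned above:
import numpy as np

def dlt(world_pts, pixel_pts):
    """Estimate the 3x4 projection matrix M from n >= 6 world<->pixel pairs."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        P = [X, Y, Z, 1.0]
        rows.append(P + [0.0] * 4 + [-u * c for c in P])
        rows.append([0.0] * 4 + P + [-v * c for c in P])
    Q = np.asarray(rows)
    _, _, Vh = np.linalg.svd(Q)      # full_matrices=True by default
    return Vh[-1].reshape(3, 4)      # last row of Vh spans the null space of Q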
-
- The original of the train-movement illusion can be found on page 16.
- There are actually 18 pages.
October 3, 2024
- I used the slides on Image Formation before; should check last year's CV1. Otherwise I could use some of the MIT-slides.
October 2, 2024
- Looking at Peter Corke's github for some nice usage of Canny edge.
- Note that for Chapters 12 and 14, PyTorch and Open3D also have to be installed.
- To run example 14.8.3, you have to download two image sequences of 100MB each.
October 1, 2024
- Looking for Lab 1-2, to the Open Class.
- The Open Class uses the centroid of the image found by thresholding in HSV space.
- This solution is too simple, going to gray-scale and taking the center-of-mass.
- This solution at least takes the contour data from the biggest contour by thresholding bgr-values.
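- A minimal opencv sketch of the HSV-threshold-plus-centroid approach (the blue-ish bounds are my own placeholders):
import cv2
import numpy as np

img = cv2.imread('frame.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([100, 100, 50]), np.array([130, 255, 255]))
M = cv2.moments(mask, binaryImage=True)
if M['m00'] > 0:                     # centroid from the first-order moments
    u, v = M['m10'] / M['m00'], M['m01'] / M['m00']
    print(f'centroid at ({u:.1f}, {v:.1f})')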
-
- This is a list of all ROS2 Humble packages.
- Searching for different tags. dolly_follow is based on laser-scans.
- Leo robot has a line-following example (ROS1 based). The color_mask is HSV based. After preprocessing, the example uses tflite_runtime.interpreter. The CNN used is explained in this notebook. The example is described here, with more details here. Nice robot, but more expensive than a Nao robot.
- Looked into the example. roslaunch leo_example_line_follower color_mask.launch subscribes to camera/image_raw, and uses config/blue.yaml to create a binary image. In rqt you can interactively choose a good color range. The line-follower comes with two provided models (color_mask_model.tflite and smaller_mask.tflite), both equally large.
- Looked with the experimental TFLite Analyser what is inside the models.
- The next step is to train your own model with some recordings, at kaggle.
- There one can see that a CNN with 5 convolutional layers is used.
September 19, 2024
- Read Run to the Source, which advocates the use of containers.
- They gave three examples:
September 17, 2024
- I tried the failed docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble directly on the RAE again, to see if some of the downloads were cached.
- Seems that already half of the pulls are complete.
- Third time it worked. Started the new docker.
- What is new is that the prompt looks different, it isn't that obvious anymore that you are in the container.
- The next new feature is that the LED now becomes green, and the lcd shows RAE ready. Yet, strangely enough, I can no longer login to the RAE. Did it change its ip?
- Also a ros2 topic list in my ros_humble_env doesn't show any additional topic.
- The only error that I see with the bringup is a firmware crash:
[component_container-1] [2024-09-17 08:37:35.106] [warning] Firmware crashed but the environment variable DEPTHAI_CRASHDUMP is not set, the crash dump will not be saved.
[component_container-1] [ERROR] [1726562255.149777561] [rae_container]: Component constructor threw an exception: _Map_base::at
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'rae' of type 'depthai_ros_driver::Camera' in container '/rae_container': Component constructor threw an exception: _Map_base::at
- Couldn't find in my labbook that I did any firmware update, so maybe I should.
- Performed the firmware update and rebooted. Installed inside the container net-tools, which shows that the wifi is still correctly configured to LAB42.
- Note that with two containers, the RAE is now 90% full.
- Note that in the new container, zsh is no longer installed. bash as shell works.
- Now I get an extensive ros2 topic list both on the RAE-container and in my ros_humble_env.
- I see the websocket of foxglove launched, but connecting to the ws from Chrome fails (although I allowed executing unsafe scripts):
Check that the WebSocket server at ws://192.168.197.55:8765 is reachable and supports protocol version foxglove.websocket.v1.
- Yet, rviz2 works, displaying both /rae/right/image_raw and /rae/stereo_back/image_raw.
- The RobotModel display fails. The Transforms are OK, but for the chassis I get Could not load mesh resource 'package://rae_description/urdf/models/RAE-TOP-ASSY_0417.STL'.
-
- Trying to solve this by installing rae-description package also on the laptop side.
- Made a ros_humble_ws, did a git clone https://github.com/luxonis/rae-ros.git in ros_humble_ws/src, followed by a colcon build --symlink-install.
- Fails on a missing package, so did mamba install ros-humble-camera-info-manager. That helps; rae_description can be built. Next missing package is depthai_bridge. Continue with mamba install ros-humble-depthai-bridge. Didn't work. Followed the instructions from here, and did mamba install ros-humble-depthai-ros. Also not available on RoboStack, so time to install it from source.
- Starting with mamba install libopencv-dev (doesn't exist). Same for mamba install python3-rosdep
- Instead did git clone --recursive https://github.com/luxonis/depthai-core.git --branch main in my ros_humble_ws/src.
- That helps. Still an easy_install warning on rae_sdk, and a missing libgpiod-dev for rae_hw.
- Performing mamba install libgpiod-dev also failed, so did sudo apt install libgpiod-dev (outside the ros_humble_env).
- Still the gpiod library is not found. Instead, tried mamba install libgpiod (which exists). That works; now we are back to missing depthai_bridge.
- That can be found in depthai-ros (before I installed depthai-core)
- After also cloning git clone https://github.com/luxonis/depthai-ros.git, the next missing package is mamba install ros-humble-vision-msgs
- Added a COLCON_IGNORE to src/rae-ros/rae_hw. Now depthai_bridge fails on missing ffmpeg_image_transport_msgs. Did mamba install ros-humble-ffmpeg-image-transport-msgs. That fails. Trying instead mamba install ros-humble-image-transport-msgs. Also fails. Trying mamba install ros-humble-misc-utilities, because it is part of ros-misc-utilities package.
- Instead did a git clone https://github.com/ros-misc-utilities/ffmpeg_image_transport_msgs.git --branch=humble, and the bridge finishes.
- The depthai_examples couldn't be built (looks like an opencv dependency).
- The laptop was out of disk-space, so rebooted and made some space.
- Added a COLCON_IGNORE in ~/ros_humble_ws/src/depthai-ros/depthai_examples
- Also did touch depthai-ros/depthai_ros_driver/COLCON_IGNORE
- Now rae_camera fails on missing rtabmap_slam. Did mamba install ros-humble-rtabmap-ros. That should work, but doesn't, so touch rae-ros/rae_camera/COLCON_IGNORE
- Now the RobotModel is correctly displayed in RVIZ2:
- Note that a sudo apt upgrade updated foxglove, so maybe it works now.
- When using the right ip-address (and setting the Teleop general top to , the RAE drives.
- The Image panel doesn't work, because there seems to be no calibration (yet, I see the camera_info in the topic list).
- Also on the RAE inside the container, there is only one image msg (without echo). Maybe running the foxglove bridge is too much, or running rtab-map. At least, the bringup gives a warning: CPU usage is 99.3%.
-
- Looked into /ws/src/rae-ros/rae_bringup/launch: there are 4 launch files there:
-rw-r--r-- 1 root 900 Mar 7 2024 robot.launch.py
-rw-r--r-- 1 root 2400 Mar 7 2024 bringup.launch.py
-rw-r--r-- 1 root 3835 Mar 7 2024 slam.launch.py
-rw-r--r-- 1 root 5243 Mar 7 2024 rtabmap.launch.py
- From those, robot.launch.py is the smallest.
- In ws/src/rae-ros/rae_camera/launch there are two launch files, actually bigger than robot.launch.py (but smaller than bringup.launch.py).
- In /ws/src/rae-ros/rae_hw/launch there are also two (big) files:
-rw-r--r-- 1 root 5183 Mar 7 2024 control.launch.py
-rw-r--r-- 1 root 3271 Mar 7 2024 control_mock.launch.py
- This launch is described in rae-ros Testing motors.
- In the rae-ros LED node, peripherals.launch.py is mentioned. That launch-file is no longer there.
- Tried to inspect control.launch.py, but first had to install vim. Note that 543 packages are ready to be updated (many ros-related).
- Looked at the LCD node examples, and tried python3 battery_status.py. No warnings, but also no update. That seems correct, because it subscribes to battery_status updates, so that node should be launched first.
- Did a ros2 node list, and three nodes are running (including the battery status):
/battery_status_node
/rviz
/transform_listener_impl_559ab7a2c310
- Note that rviz is running on my laptop, not the RAE. ros2 topic echo /battery_status gives no updates, so looking how to restart a node.
- There is no ros2 kill, so looked with ros2 lifecycle nodes, which gave no managed nodes. Read the concept page (2015), which was not explicit about how to make your node managed.
- According to this lifecycle demo, the trick is to inherit from LifecycleNode instead of just Node.
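- For reference, a minimal managed-node sketch in rclpy (following that demo; my own code, not the rae's):
import rclpy
from rclpy.lifecycle import Node, State, TransitionCallbackReturn

class ManagedTalker(Node):
    """Inherit from rclpy.lifecycle.Node so 'ros2 lifecycle set ... configure' works."""

    def __init__(self):
        super().__init__('managed_talker')

    def on_configure(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('configured')
        return TransitionCallbackReturn.SUCCESS

    def on_activate(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('activated')
        return super().on_activate(state)

    def on_shutdown(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('shutting down')
        return TransitionCallbackReturn.SUCCESS

def main():
    rclpy.init()
    rclpy.spin(ManagedTalker())

if __name__ == '__main__':
    main()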
- Also python3 led_test.py doesn't give a lot of response. Let's reboot and try again.
-
- After the reboot, the script led_test.py still gives no output. A ros2 node list gives only car_demo_node.
- Looked into robot.launch.py. It starts two scripts: rae_camera/rae_camera.launch.py and rae_hw/control.launch.py. The last one with three arguments: 'run_container': 'false', 'enable_battery_status': 'true', 'enable_localization': 'true'. The last one can be false, the first one should be checked. The rae_camera.launch.py includes the reset of the pwm, which is a bit strange because that is control related.
- Looked into control. Nice to see that at least the led-node has lifecycle control.
- Looked at the latest , which the first (and the last) included lifecycle control. Copied the file from /ws/src/rae-ros/rae_hw/launch to /ws/install/rae_hw/share/rae_hw/launch/. Launched ros2 launch rae_hw peripherals.launch.py. That launches several nodes, only fails at the led-node:
[INFO] [launch]: All log files can be found below /root/.ros/log/2024-09-17-15-31-59-117344-rae-7-713
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [725]
[INFO] [speakers_node-2]: process started with pid [727]
[INFO] [mic_node-3]: process started with pid [729]
[component_container-1] [ERROR] [1726587120.573542903] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'battery_node' of type 'rae_hw::BatteryNode' in container 'rae_container': Could not find requested resource in ament index
[component_container-1] [ERROR] [1726587120.587835932] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'lcd_node' of type 'rae_hw::LCDNode' in container 'rae_container': Could not find requested resource in ament index
[component_container-1] [ERROR] [1726587120.597513229] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'led_node' of type 'rae_hw::LEDNode' in container 'rae_container': Could not find requested resource in ament index
- After this partly failed launch I have 5 nodes:
/car_demo_node
/launch_ros_713
/mic_node
/rae_container
/speakers_node
- The command ros2 lifecycle nodes gives now two nodes:
/mic_node
/speakers_node
- Looked at the mic_node. Initially ros2 lifecycle get /mic_node:
unconfigured [1]
- Next ros2 lifecycle set /mic_node configure gives:
Transitioning successful
- That is confirmed with ros2 lifecycle get /mic_node:
inactive [2]
- With ros2 topic list I see from the /mic_node only:
/mic_node/transition_event
- Also ros2 lifecycle list /mic_node is interesting, showing the next possible states. Unfortunately, after shutdown there is no restart. Yet, after shutdown the mic_node is still in the ros2 node list.
- Looked in the code of the node. on_shutdown() calls cleanup(), which should give a snd_pcm_close(). Yet, the speakers were not active, so maybe that's why I didn't hear it.
- My commands could be seen in the log of the launch:
[mic_node-3] [INFO] [1726587937.341837139] [mic_node]: Mic node configured!
[mic_node-3] [INFO] [1726588300.750866607] [mic_node]: Mic node activated
[mic_node-3] [INFO] [1726588402.740562268] [mic_node]: Mic node shuttind down!
- Looked at the peripherals.launch.py; the mic-node is launched as a separate LifecycleNode, the battery is a ComposableNode inside a container, which seems to be a plugin (which fails).
- Looked in control.launch.py. The variable run_container is read, but not used. The battery-node and the others are run as LifecycleNode.
- Modified the peripherals.launch.py script so that it launches the battery-node and led-node in the same way as control (leaving lcd-node for the moment failing inside the container). Still, the led_test.py does nothing, while the battery_status script gives an error:
File "/ws/src/rae-ros/rae_bringup/scripts/battery_status.py", line 98, in listener_callback
led_msg.data = [ColorRGBA(r=0.0, g=0.0, b=0.0, a=0.0)]*40
File "/ws/install/rae_msgs/local/lib/python3.10/dist-packages/rae_msgs/msg/_led_control.py", line 223, in data
assert \
AssertionError: The 'data' field must be a set or sequence and each value of type 'ColorPeriod'
- Yet, the battery-node showed an additional message:
[battery_node-5] [INFO] [1726590364.803195209] [battery_node]: Battery node configured!
[battery_node-5] [INFO] [1726590365.306569899] [battery_node]: Power supply status changed to [Charging] after 0 h 0 min 0 secs.
[battery_node-5] [INFO] [1726590372.591528110] [battery_node]: Battery node activated!
- With the sys_info node active, I see now the topic /cpu (float). ros2 topic echo /cpu gives:
data: 8.699999809265137
---
data: 1.2999999523162842
---
data: 1.0
---
data: 0.699999988079071
---
data: 1.7000000476837158
---
data: 1.2999999523162842
---
data: 1.0
---
data: 1.2999999523162842
---
data: 1.200000047683715
- That is a number that fluctuates a lot. I also see /mem, /disk and /net_up.
- ros2 topic echo /mem gives more consistent values:
--
data: 46.29999923706055
- Time to go.
September 16, 2024
- Adding the download instructions of the docker-image to the User Manual.
- On my WSL Ubuntu 22.04 the docker installation instructions didn't work.
- On my native Ubuntu 20.04 partition I had a working docker. The command docker pull luxonis/rae-ros-robot:humble downloaded an image of 5.82GB, which was 9 months old.
- Updating the Firmware is documented here. Did a check with mender --version, which indicated:
2.1.2
runtime: go1.14.7
- So, it seems that the latest version of the firmware is already installed.
- While looking for the command to download the actual latest version (6 months old), I found the container page.
- Deleted some old benchbot images to gain space. Some images (lab42, benchbot/simulator:sim_omni) could not be deleted because a container was dependent on it.
- Executed docker rm -v $(docker ps --filter status=exited -q), which also allowed removing lab42 and benchbot/simulator:sim_omni. The remaining old images are quite small.
- Uploading to the RAE gave a client_loop: send disconnect: Broken pipe, so instead I did docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble directly on the RAE. Kicked out again, but just tried again. The download doesn't make much progress, so maybe the disk is full. Or is it better to do it via the USB-C cable? At least some downloads are complete. Maybe a good moment to go home and let the download do its thing. Most of the download was finished when I got an unexpected EOF.
- Running docker save ghcr.io/luxonis/rae-ros:v0.4.0-humble failed (reference doesn't exist); giving the IMAGE_ID seems to work.
September 13, 2024
- Looked into Jenkin's ROS foxy introduction, although from slide 22 it also covers ROS2, from slide 38 it introduces OpenCV (2 slides only).
- Looks like the code for the students and instructors is the same. At least the code is from Jan 18. The ROS2 code is ros-humble based.
- Their lab0 / Tutorial / apb is on driving a block-robot around with teleop in gazebo.
- Chapter 2 does the same, but this time with navigation (at least, goal position are given)
- Chapter 4 adds non-visual sensors (bumper, imu, lidar, sonar)
- Chapter 5 is the one which was useful for our first assignment.
- Chapter 6 contains drive by line code (for a car)
- Chapter 7 is driving around based on a lidar map.
- Chapter 8 has no readme, but launches a camera-view. Note that Chapter 8 is about system control.
- Chapter 11 is about multiple robots.
- Chapter 12 is about HCI, the code is about using a joystick with ROS2.
September 11, 2024
- Looked into Dudek's book for ROS2 assignments on calibration or line-following.
- Not in the index. Chapter 6 covers Pose Maintenance, but only has Open Questions about landmarks.
- Camera calibration is covered in section 5.1.3.
- Problem 5.10.2 is about line-structures, and should have example code.
- Problem 5.10.3 is about camera calibration.
- Read the tutorial that comes with the code. Their code is ROS2 Foxy based, although there are also ROS1 Noetic snippets.
- They start with a Hello World in ROS1, before moving to ROS2.
- The ROS2 code is in the ros2_ws workspace, which for ch5 has 6 example scripts: aruco_target.py, good_features.py, view_camera.py, canny_edges.py, harris_corners.py and opencv_camera.py.
- Tried running python3 opencv_camera.py, but nothing happens (also no warnings / errors). Looks like it just publishes the webcam on topic '/mycamera/image_raw'. That is right: it can be displayed with ros2 run image_view image_view --ros-args -r image:=/mycamera/image_raw (inside the ros_humble_env). It should be combined with the python3 view_camera.py script provided by Dudek.
- Actually, this combination would be a nice Vision Hello World.
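- A minimal sketch of the publishing half of that Hello World (my guess at what opencv_camera.py does, not Dudek's actual code):
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class OpenCVCamera(Node):
    """Grab webcam frames and publish them on /mycamera/image_raw."""

    def __init__(self):
        super().__init__('opencv_camera')
        self.pub = self.create_publisher(Image, '/mycamera/image_raw', 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)
        self.timer = self.create_timer(1.0 / 30, self.tick)  # ~30 fps

    def tick(self):
        ok, frame = self.cap.read()
        if ok:
            self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding='bgr8'))

def main():
    rclpy.init()
    rclpy.spin(OpenCVCamera())

if __name__ == '__main__':
    main()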
- Made a view_rae_camera.py version, but again nothing happens. A bit strange, because the RAE launch seems successful, but only a subset of the topics is visible:
/audio_in
/imu/data
/lcd
/mycamera/image_raw
/parameter_events
/rae/imu/data
/rae/right/image_raw/compressed
/rosout
/tf
- Rebooting RAE-3 and trying again. Strangely enough, I receive a battery warning (5% remaining, while the RAE is charging via USB).
- Let's try RAE-1 for the moment. Same problem. Now all topics are visible, but they are gone after a few minutes.
- Was afraid that it was a memory problem. With docker ps -a we saw all container instances which were exited (at least 20). Checked with df -k: /data was 92% full. After deleting the old containers with docker rm -v $(docker ps --filter status=exited -q), /data is 62% full.
- Yet, the docker still crashes. Seems to be a power problem, so I used my laptop's 45W charger to power the system up.
- A positive thing was that my view_rae_camera.py script showed one image (whereafter the docker crashed again).
- Note that on May 31 I changed the robotapp.toml to v0.4.0. Yet, in the root shell this app is not active (no ROS environment, no ros2 binary).
- Looked at Luxonis Hub. The app was stopped, but when started it directly stops again.
- The commands docker start `docker ps -q -l` and docker attach `docker ps -q -l` are quite handy, because they restart the most recently exited container and bring you back into the container. Tried robothub-ctl stop, but that also stops the wifi (as Joey mentioned).
- Inside the container, there are many ros-packages available in /opt/ros/humble/ros2, but the rae-packages can be found at /ws/install/.
- The command ros2 pkg executables shows all executables, including:
camera_calibration cameracalibrator
camera_calibration cameracheck
camera_calibration_parsers convert
depthai_examples feature_tracker
depthai_examples yolov4_spatial_node
rae_bringup battery_status.py
rae_camera camera
- Executed the last one with ros2 run rae_camera camera, which gave the warning:
[2024-09-11 12:40:58.062] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
- This executable is publishing slightly different topics:
/color/camera_info
/color/image
/color/image/compressed
/color/image/compressedDepth
/color/image/theora
/parameter_events
/rae/right_front/camera_info
/rae/right_front/image_raw
/rae/right_front/image_raw/compressed
/rae/right_front/image_raw/compressedDepth
/rae/right_front/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
- The /rae/right_front/image_raw could be displayed, although with a green band on top (and crashed, while charged with 45W).
- Looked into the docker run command, but -v /dev/bus/usb:/dev/bus/usb was already part of the call.
- Strangely enough, one terminal shows a running camera nodelet, while another terminal attached to the still-active docker shows only the default topics.
- Looked into /ws/src/rae-ros/rae_camera/launch; there is also a rae_camera.launch.py and perception_ipc.launch.py. The second calls perception_ipc_rtabmap. The first makes use of the depthai_ros_driver::Camera.
- The output of the rae_camera.launch.py is:
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [389]
[component_container-1] [INFO] [1726060174.802235668] [rae_container]: Load Library: /underlay_ws/install/depthai_ros_driver/lib/libdepthai_ros_driver.so
[component_container-1] [INFO] [1726060175.297192658] [rae_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1726060175.297394788] [rae_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1726060175.322210378] [rae]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [2024-09-11 13:09:35.326] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[component_container-1] [INFO] [1726060179.524265695] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[component_container-1] [INFO] [1726060179.524489991] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[component_container-1] [INFO] [1726060179.555126289] [rae]: Device type: RAE
[component_container-1] [INFO] [1726060179.822587784] [rae]: Pipeline type: rae
[component_container-1] [INFO] [1726060180.740068727] [rae]: Finished setting up pipeline.
[component_container-1] [INFO] [1726060181.285472958] [rae]: Camera ready!
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rae' in container '/rae_container'
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-2]: process started with pid [418]
[INFO] [busybox devmem 0x20320180 32 0x00000000-2]: process has finished cleanly [pid 418]
- The topic list is again:
/diagnostics
/parameter_events
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/rosout
- Looking in the container at the running processes, I only see component_cont.. using 12.6% of the CPU and 8.1% of the memory.
- Installed bsdmainutils, which gave me /usr/bin/column.
- The command paste <(cat /sys/class/thermal/thermal_zone*/type) <(cat /sys/class/thermal/thermal_zone*/temp) | column -s $'\t' -t | sed 's/\(.\)..$/.\1°C/' gave the temperature of the different zones:
mss 44.5°C
css 47.0°C
nce 45.6°C
soc 47.0°C
bq27441-0 48.7°C
iwlwifi_1 45.0°C
- rae-3 works without problems for Joey (with his face-recognition program), so switching back. Note that the container is 522 packages behind. Maybe update and do a commit?
- Tried first ros2 run rae_bringup battery_status.py, which gives no output.
September 10, 2024
- Started up my RAE.
- Looked up the ip-number in RoboHub (RAE is running agent version 23.223.1855).
- Logged into RAE.
- Created a ~/bin/start_docker.sh.
- Inside the container, I specified ROS_DOMAIN_ID=7, to guarantee that the Coach7 laptop was coupled to the right robot.
- Looked with ros2 topic list which topics were published.
- Did the same at the Coach7 side (after specifying the ROS_DOMAIN_ID=7 there):
/audio_in
/battery_status
/diagnostics
/diff_controller/odom
/diff_controller/transition_event
/dynamic_joint_states
/imu/data
/joint_state_broadcaster/transition_event
/joint_states
/map
/map_metadata
/odometry/filtered
/parameter_events
/pose
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/robot_description
/rosout
/scan
/set_pose
/slam_toolbox/feedback
/slam_toolbox/graph_visualization
/slam_toolbox/scan_visualization
/slam_toolbox/update
/tf
/tf_static
- Had to do sudo apt install ros-humble-image-view. After that ros2 run image_view image_view --ros-args -r image:=/rae/stereo_front/image_raw worked for the raw images, which are indeed published with a low framerate.
- Also tried ros2 run image_view stereo_view --ros-args -r stereo:=/rae/stereo_front/image_raw, which gave:
[INFO] [1725956552.527957143] [stereo_view_node]: Subscribing to:
* /stereo/left/image
* /stereo/right/image
* /stereo/disparity
[WARN] [1725956552.528719471] [stereo_view_node]: defaults topics '/stereo/xxx' have not been remapped! Example command-line usage:
$ ros2 run image_view stereo_view --ros-args -r /stereo/left/image:=/narrow_stereo/left/color_raw -r /stereo/right/image:=/narrow_stereo/right/color_raw -r /stereo/disparity:=/narrow_stereo/disparity
[WARN] [1725956567.528621568] [stereo_view_node]: [stereo_view] Low number of synchronized left/right/disparity triplets received.
Left images received: 0 (topic '/stereo/left/image')
Right images received: 0 (topic '/stereo/right/image')
Disparity images received: 0 (topic '/stereo/disparity')
Synchronized triplets: 0
Possible issues:
* stereo_image_proc is not running.
Does `ros2 node info stereo_view_node` show any connections?
- Yet, it is a bit unnecessary to run stereo_image_proc to generate the disparity, when it is already generated by the RAE itself.
- For the compressed images I installed ros-humble-image-transport-plugins on both sides, but still the raw transport is used.
- Yet, the RAE container is quite old, so I did an update there (also installed apt-util to remove an apt-warning).
- Yet, even after this update, ros2 run image_view image_view --ros-args -r image:=/rae/right/image_raw/compressed -p image_transport:=compressed fails to give results.
- Looked at image_transport_tutorials, and ros2 run image_transport list_transports gives:
Declared transports:
image_transport/compressed
image_transport/compressedDepth
image_transport/raw
image_transport/theora
- Yet, in rviz2 the compressed image can be displayed, so this is something specific to image_view.
- Gave coach7 to Qi Bi.
-
- Continue with ros-dual, working on the Ubuntu 20.04 partition.
- logged in to rae.
- Edited /etc/hostname from 'rae' to 'rae-7'. Should be visible after reboot.
- Checked the docker I am running on the RAE robot. It is an image from 9 months ago, 5.82 GB.
- Checked the version of the humble containers. v0.0.0 is published 9 months ago. I did a download at October 18, 2023.
- Miniforge was already installed, so did mamba activate ros_humble_env.
- Looked with conda config --env --get channels, which gave two channels:
--add channels 'conda-forge' # lowest priority
--add channels 'robostack-staging' # highest priority
- Installed without problems mamba install ros-humble-desktop
- Continued with mamba install compilers cmake pkg-config make ninja colcon-common-extensions catkin_tools rosdep and mamba install ros-humble-image-transport-plugins
- Also mamba install ros-humble-image-view.
- In the container, I made /etc/ros/scripts/launch_robot.sh, which executes ros2 launch rae_bringup robot.launch.py. Yet, the container crashed; couldn't find the script back afterwards.
- So, in the ros_humble_env, I could do ros2 topic list.
- Yet, although the RAE side looked OK (purple LEDs lit, last prints also OK):
[component_container-1] [INFO] [1725972043.420819022] [battery_node]: Power supply status changed to [Discharging] after 0 h 0 min 0 secs.
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process started with pid [303]
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process has finished cleanly [pid 303]
- Now image_view gives a black window, and the topic list only the default two. Trying a reboot of RAE.
- Instead of image_view, tried rviz2. An old config is loaded, because a RobotModel is also loaded (and the package rae-description is not installed).
- Yet, it still seems that the RAE robot crashed right away.
- According to Joey, he saw that behavior after working too long with the robot. Letting it cool down and charge solved this unstable behavior.
-
- Trying Joey's RAE instead. That is not heated up, and has the latest docker image.
- Had some trouble logging in. The USB connection only works with an adapter in between, and its IP changed from *.*.*.140 to *.*.*.139.
- Logged in. The docker ps also shows that the image is 9 months old, so how can you check the version?
- Was able to use image_view and rviz2. Also drove around with teleop. Teleop froze for a moment (with a message that the battery was at 25%), but continued directly after (still had all topics).
- Strangely enough, I can still drive, but receive no images. Should build a system to restart per node. Foxglove should be a good start to see what is running.
- Joey already experimented with Foxglove.
- As indicated, the package is already installed on the RAE side, so ros2 launch foxglove_bridge foxglove_bridge_launch.xml works without problems.
- Installed the foxglove-studio on nb-ros native. It tries to connect to the websocket, but complains that Firefox is used.
- With Chrome it goes better, although only a subset of topics is shown. For instance, battery_level cannot be displayed. Restarting the nodes on the RAE.
- Seems to be intended behavior, because the foxglove_bridge reports:
[foxglove_bridge-1] [INFO] [1725979333.325492495] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/move_base_simple/goal" (geometry_msgs/PoseStamped) on channel 1
[foxglove_bridge-1] [INFO] [1725979333.332678530] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/clicked_point" (geometry_msgs/PointStamped) on channel 2
[foxglove_bridge-1] [INFO] [1725979333.350200156] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/initialpose" (geometry_msgs/PoseWithCovarianceStamped) on channel 3
- According to the foxglove bridge documentation, there is a topic_whitelist (see the example below).
- Yet, all topics are published by default. After the restart (and setting ROS_DOMAIN_ID=7 in the foxglove-bridge terminal) more channels are added (although some are removed):
[foxglove_bridge-1] [INFO] [1725980706.741722533] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/move_base_simple/goal" (geometry_msgs/PoseStamped) on channel 4
[foxglove_bridge-1] [INFO] [1725980706.760216859] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/clicked_point" (geometry_msgs/PointStamped) on channel 5
[foxglove_bridge-1] [INFO] [1725980706.767273529] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/initialpose" (geometry_msgs/PoseWithCovarianceStamped) on channel 6
[foxglove_bridge-1] [WARN] [1725980706.767956825] [foxglove_bridge]: Some, but not all, publishers on topic '/tf' are offering QoSDurabilityPolicy.TRANSIENT_LOCAL. Falling back to QoSDurabilityPolicy.VOLATILE as it will connect to all publishers
[foxglove_bridge-1] [INFO] [1725980706.768092368] [foxglove_bridge]: Subscribing to topic "/tf" (tf2_msgs/msg/TFMessage) on channel 2
[foxglove_bridge-1] [INFO] [1725980706.787031155] [foxglove_bridge]: Subscribing to topic "/tf_static" (tf2_msgs/msg/TFMessage) on channel 6
[foxglove_bridge-1] [INFO] [1725980722.820361441] [foxglove_bridge]: Removed channel 16 for topic "/cmd_vel" (geometry_msgs/msg/Twist)
[foxglove_bridge-1] [INFO] [1725980722.820630318] [foxglove_bridge]: Removed channel 10 for topic "/joint_state_broadcaster/transition_event" (lifecycle_msgs/msg/TransitionEvent)
[foxglove_bridge-1] [INFO] [1725980722.820692818] [foxglove_bridge]: Removed channel 12 for topic "/diff_controller/transition_event" (lifecycle_msgs/msg/TransitionEvent)
[foxglove_bridge-1] [INFO] [1725980722.820737027] [foxglove_bridge]: Removed channel 13 for topic "/dynamic_joint_states" (control_msgs/msg/DynamicJointState)
- Other channels open when those topics are selected in Foxglove.
- Both teleop and image worked. The stream is much faster in Foxglove compared to rviz2.
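- For later: restricting what the bridge exposes should amount to passing the whitelist at launch, roughly like this (the regex-list syntax is an assumption based on the foxglove_bridge documentation, not tested here): ros2 launch foxglove_bridge foxglove_bridge_launch.xml topic_whitelist:="['/rae/.*', '/tf', '/tf_static']".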
- Next, looked at Rae-faceTracking.
- The code is without much documentation.
- Cloned the repository, copied the directory in ~/mamba_ws/src (further empty).
- Did a colcon build --symlink-install from my (ros_humble_env). Got a warning: easy_install command is deprecated.
- Sourced ~/mamba_ws/install/setup.sh, and executed python3 src/Rae-faceTracking/rae_oyster/rae_oyster/pearl_node.py. That gave two eyes on the screen, plus an error message:
File "~/mamba_ws/src/Rae-faceTracking/rae_oyster/rae_oyster/pearl_node.py", line 70, in process_image
face_rects = self.face_cascade.detectMultiScale(frame_gray, 1.3, 5)
cv2.error: OpenCV(4.6.0) /home/conda/feedstock_root/build_artifacts/libopencv_1671406913289/work/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'
- There is also another error message, mentioning the cascade path: ~/mamba_ws/rae_oyster/resource/haarcascade_frontalface_default.xml
- So, when I start python3 rae_oyster/rae_oyster/pearl_node.py from ~/mamba_ws/src/Rae-faceTracking, the face tracking works (including the eyes).
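- A minimal sketch of a working-directory-independent fix (assuming pearl_node.py loads the cascade via a relative path; the file name is taken from the error above):

import os
import cv2

# resolve the cascade relative to this file instead of the current working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
cascade_path = os.path.join(script_dir, '..', 'resource', 'haarcascade_frontalface_default.xml')
face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():
    raise RuntimeError('could not load cascade from ' + cascade_path)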
- The install specifies install_requires 'setuptools', 'rclpy', 'image_pipeline', 'face_recognition'. The install also mentions the entry point rae_oyster.pearl_node:main.
September 9, 2024
- Checked the provided Ubuntu 22.04 machine.
- Booting halts, but continues after an Enter.
- The machine had no internet access, so I should register the machine at iotroam.
- Also ifconfig (from the net-tools package) was not installed, so checked the MAC address with ip link show.
- Running the Dockerfile apt-get install fails on missing packages like: ffmpeg, python3-rosdep, ros-humble-rtabmap-slam.
- Start with installing ros-humble, following install from debs.
- The package locales was already installed. Added en_US as language.
- The package curl was not yet installed. Updated the three update-manager packages that were held back.
- Doing sudo apt install ros-humble-desktop installed 1023 packages.
- Should continue with the Environment setup.
- Added the ros-humble setup to ~/.bashrc. Tested it with ros2 topic list.
- Also ran the Talker-listener example with success (which tests both C++ and Python).
- The docker-file only mentions three packages, but including dependencies 47 packages are installed.
-
- Next step is to run the Dockerfile, which requires that Docker is installed; that installation includes 12 new packages. sudo docker run hello-world works.
- The Docker daemon runs as the root user, so these post-install steps are needed. The first step was not needed, the docker group already existed.
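- For reference, those post-install steps boil down to (per the Docker documentation; the group creation was not needed here):

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker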
- Still, the image luxonis/rae-ros-robot could not be found.
- Instead, I had to do docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble, as specified here.
- That was step 1-3 of setting up rae-ros, which could be skipped because there is already a docker image uploaded to the robot. Could (tomorrow) continue with step 4.
August 22, 2024
August 16, 2024
- Section 2.1 of Szeliski's book goes deeper into the perspective transformations.
July 16, 2024
July 11, 2024
- Ordered two additional RAE robots (free shipping).
July 1, 2024
- The ideas for the exercises are currently as follows.
- Follow the ceiling lights (2 weeks)
- Localisation on traffic-cones, playing Curling (2 weeks)
- Conquer the flag in a maze (3 weeks)
- There will be a summer TA and three TAs preparing during period 1.
- The tutorial sessions (werkcolleges) will be 1h on the mathematics, with the 2nd hour a demo of what is expected in the practicals.
- I will give the lectures in week 1-4, together with Shaodi in the first and last week.
June 27, 2024
- Reconnected to the RAE again, following the steps from May 27.
- Logged in at the IP address given by the hub.
- Last update of rae-ros github is 3 months ago, so I do not have to update.
- Did ros2 launch rae_bringup bringup.launch.py, which should also bringup the slam_toolbox.
- See a lot of error-messages:
[component_container-1] [ERROR] [1719490274.253082247] [rae]: No available devices (3 connected, but in use)
[component_container-1] [INFO] [1719490274.253365414] [rae]: No ip/mxid specified, connecting to the next available device.
- Could be an error because the robot is still connected with the USB-C (to charge the battery, although the live view indicates that the battery is at 100%). Live view depletes the battery fast (down to 92% in a minute). System is frozen.
- When starting ros2 launch rae_bringup robot.launch.py I don't see any errors.
- Restarted nb-dual, because WSL didn't start.
- Logged in to rae, and checked if docker was running with docker ps. Nothing running.
- Started again docker run -it -v /dev/:/dev/ -v /sys/:/sys/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' --network host luxonis/rae-ros-robot:humble.
- The topics are not visible in my WSL Ubuntu 22 (with ROS2 Humble), because WSL2 is using another IP address.
- Looked at the suggestion given in this discussion.
- Hyper-V was not active on my machine, so first followed the instructions for Windows 10. The first two options (via PowerShell) failed; via settings I got a warning about .NET Framework 3.5, which was indeed not installed (and was activated in the Hyper-V instructions). Also selected that component and tried the installation again.
- After the reboot the .NET Framework 3.5 was installed, but the Hyper-V component was not. Tried again, but received the same error.
- It might be better to try to get the ros-bridge working (natively). See tutorial.
- Before I go there, a last attempt. WSL networking suggests that Mirrored mode networking is possible.
- The WSL manual suggests that .wslconfig should be modified.
- Yet, the setting networkingMode=mirrored has a footnote that it is only for Windows 11.
- Another option could be port forwarding, with netsh, as suggested in the WSL networking documentation.
- This long discussion suggested two options (use WSL1 or use a port forwarding script). The 2nd option is difficult, because WSL2 will use random ports.
- Trying the suggested backup and restart as WSL1.
- Importing the backup gave an ACCESSDENIED error.
- Switching the version didn't work, because I had done wsl -t Ubuntu-22.04 followed by wsl --unregister Ubuntu-22.04.
- Instead, did wsl --set-default-version 1 followed by wsl --install Ubuntu-22.04. That gave:
Ubuntu 22.04 LTS is already installed.
Starting Ubuntu 22.04 LTS...
- Had to install net-tools, but after that I could check that I now have a WSL1 Ubuntu 22.04 version which can connect to LAB42.
- Installed the key-ring and did sudo apt update.
- Did sudo apt install ros-humble-desktop (3Gb).
- Could still login to rae, although the docker was already gone.
- Started the docker and the script, but got several errors that the device was already in use. Decided on a reboot of RAE. Still, the same errors.
- Connected to the LiveView of the Luxonis hub. Now I get both front- and back-image streams, plus I could control the wheels.
- My fault that I was trying the whole bringup, instead of just the robot launch.
- The launch starts (although still some microphone / PCM errors). Yet, I do not see the topics on WSL1, while I can ping the robot's IP.
- Another option is to use a Discovery server, as in this tutorial.
- Rebooted the RAE for a clean start. Running fastdds discovery --server-id 0 --port 11888 in the container.
- Now ros2 topic list gives only the /rosout both in the container and on my WSL1.
- Also did the trick with the SUPER_CLIENT config. Now ros2 topic list shows the /msg topic on both sides, yet ros2 topic echo /msg shows nothing.
- Reset the Windows system-time, which also updates the WSL1 time. Still no echo.
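- For reference, the SUPER_CLIENT trick amounts to pointing FASTRTPS_DEFAULT_PROFILES_FILE at an XML profile roughly like the sketch below and restarting the ros2 daemon (the server address, port and GUID prefix are placeholders following the Discovery Server tutorial, not verified here):

<?xml version="1.0" encoding="UTF-8" ?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <participant profile_name="super_client_profile" is_default_profile="true">
    <rtps>
      <builtin>
        <discovery_config>
          <discoveryProtocol>SUPER_CLIENT</discoveryProtocol>
          <discoveryServersList>
            <RemoteServer prefix="44.53.00.5f.45.50.52.4f.53.49.4d.41">
              <metatrafficUnicastLocatorList>
                <locator>
                  <udpv4>
                    <address>192.168.197.55</address>
                    <port>11888</port>
                  </udpv4>
                </locator>
              </metatrafficUnicastLocatorList>
            </RemoteServer>
          </discoveryServersList>
        </discovery_config>
      </builtin>
    </rtps>
  </participant>
</profiles>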
-
- In the meantime, thinking about what the assignments should be.
- Start with camera calibration, including back-projection of a cube on a chessboard. Could be the standard OpenCV chessboard, or the charuco from Luxonis (one week).
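- A minimal sketch of that calibration assignment with OpenCV (board size and image paths are assumptions, not part of the exercise yet):

import glob
import numpy as np
import cv2

board = (9, 6)  # inner corners of the assumed chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob('calib/*.png'):  # hypothetical image directory
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# estimate intrinsics K and distortion coefficients from all detected boards
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('RMS reprojection error:', rms)
print('camera matrix:\n', K)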
June 21, 2024
- Looking at missing functions like gluQuadricNormals. It was mentioned in the OpenGL 2.1 reference manual.
- FreeGlut was started in 1999, because GLUT had become outdated. The current version is 3.6.0, but apt install says that 2.8.1 is the latest version.
- Looked with ldd ./scenelib2/libscenelib2.so | grep gl and /lib/x86_64-linux-gnu/libglut.so.3 is included.
- Looked inside /lib/x86_64-linux-gnu/libglut.so.3, but it only contains glut-functions, not glu-functions.
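- A plausible fix (an assumption, not yet tried): the glu* functions such as gluQuadricNormals live in libGLU.so rather than in libglut, so the examples should probably also link against GLU, e.g. by adding GLU to the target_link_libraries of MonoSlamSceneLib1 in examples/CMakeLists.txt.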
-
- Trying to install SceneLib2 on Tunis (Ubuntu 12.04)
- Also trying an older version (--branch v0.3).
- Had to modify the C++ standard to -std=c++0x in the CMakeLists.txt of SceneLib2.
- Installed OpenCV 2.3 (instead of the expected 2.4.2) with sudo apt-get install libcv2.3 libcv-dev libopencv-dev. Only gave an error with the usbcamgrabber, with the YUV 422 color. Commented three lines out.
- The executable MonoSlamSceneLib1 is built.
- Ran the executable on the TestSeqMonoSLAM. When I do 'Display Trajectory' and 'Toggle Tracking' nice things happen.
- Downloading the smallest (#13 - 2 GB) of the outdoor_rides from the frodobot-2k-dataset.
-
- While downloading, I looked at whether I could do the same trick with v0.3 of Pangolin at nb-dual.
- When doing cmake .. I get the warning that there are two OpenGL libraries available libGL.so libOpenGL.so:
OpenGL_GL_PREFERENCE has not been set to "GLVND" or "LEGACY", so for
compatibility with CMake 3.10 and below the legacy GL library will be used.
- Pangolin v0.8 is the first version that can be built on nb-dual. With this version SceneLib2 can also be built.
- Ran MonoSlamSceneLib2 on nb-dual, on the Ubuntu 22.04 WSL partition.
June 20, 2024
- Read the remainder of Chapter 14 of Peter Corke. The chapter covers both sparse and dense stereo, disparity and image rectification.
It also covers point clouds. At the end three applications are described: perspective correction, image mosaicing and visual odometry.
- In that light the exercises are not very robot-oriented. The only exception is assignment 14.3, on visual odometry.
- For MonoSLAM Peter points to SceneLib (May 2006). Andrew Davison points to the reimplementation SceneLib2 (2012).
- The group has also updated the algorithms, such as CodeSLAM. Yet, it seems a paper without code.
- Looking around, I found this LSD-SLAM, which supports MonoSLAM (but requires a camera calibration).
- An extension of LSD is DSO-SLAM, including a ROS wrapper. It not only needs a camera calibration, but also gamma correction and vignette calibration.
-
- It would be interesting to see if SceneLib2 still works for Ubuntu 22.04 (originally Ubuntu 12.04 / 14.04) and if it also works for the dataset of Earth Rover challenge.
- Looking at my WSL partition on nb-dual.
- For Step 1 no new packages had to be installed; I already had them all at the latest version.
- Pangolin had to be installed from source.
- The Pangolin install_prerequisites.sh script installed 4 packages libc++-14-dev libc++1-14 libc++abi1-14 libunwind-14 libunwind-14-dev.
- The Python installation gives some problems. It tries to install pypangolin==0.9.1, which uninstalls pybind11 (v2.12.0), pillow (v10.3.0), numpy (v1.26.4), and installs numpy-2.0.0, which gives some incompatibility problems with this version of numpy. Yet, it ends with Successfully installed.
- All 15 tests were successfully passed.
- Back to cloning the source code of SceneLib2 with git clone --recursive https://github.com/hanmekim/SceneLib2.git
- CMake failed on /home/arnoud/git/Pangolin/build/PangolinConfig.cmake, because Eigen3 could not be found.
- The trick from this discussion to add REQUIRED NO_MODULE worked.
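- For reference, the resulting CMake line is then presumably: find_package(Eigen3 REQUIRED NO_MODULE).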
- Yet, the make fails horribly on Pangolin/components/pango_core/include/sigslot/signal.hpp
- Used the trick from this discussion and updated the C++ version in the CMakeLists.txt of SceneLib2 from std=c++11 to c++14.
- Next are some version problems of OpenCV in usbcamgrabber.cpp. Upgraded the calls to cvtColor() and resize(). The framegrabber works.
- Next were some old calls to HandleInput(), which I replaced with !ShouldQuit().
- Next, -lBoost::thread could not be found. Adding set(Boost_USE_STATIC_LIBS ON) brought me further, but when building the shared library libscenelib2.so I now get a bad value for /usr/lib/x86_64-linux-gnu/libboost_thread.a
- In sceneLib2/CMakeLists.txt the Boost dependencies are defined. That library is built without problems. Maybe I should also add that Boost dependency in examples/CMakeLists.txt. That works for Boost. Now some glu-functions are missing.
June 19, 2024
June 6, 2024
- Finished the remainder of Chapter 13 of Peter Corke's book. In exercise 5 the calibration of section 13.2.1 is redone. Would be nice to do that for the RAE.
- Tried the suggested python script by Luxonis, but the wsl subcommand doesn't exist anymore.
- I am not the only one with this problem, although SpudTheBot describes a solution. The camera reboots, so it gets a new busid. So, time to write an updated version of the suggested script.
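- Such a script could be a simple polling loop around the usbipd CLI; a sketch (not the version I ended up with; assumes usbipd 4.x in the PATH of an elevated shell, and matches the device by a name substring):

import subprocess
import time

DEVICE = 'Movidius'  # substring of the usbipd list entry (assumption)

while True:
    out = subprocess.run(['usbipd', 'list'], capture_output=True, text=True).stdout
    for line in out.splitlines():
        # re-attach whenever the camera re-enumerates with a new busid
        if DEVICE in line and 'Attached' not in line:
            busid = line.split()[0]
            subprocess.run(['usbipd', 'attach', '--wsl', '--busid', busid])
    time.sleep(1)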
- No success, time to go native. Repeated python3 ColorCamera/rgb_preview.py on my Ubuntu 20.04 partition, but same error. Try again, without adapter in between.
- Note that while installing the requirements, I got the error-message:
machinevision-toolbox-python 0.9.6 requires opencv-python, which is not installed.
robothub-sdk 0.0.3 requires opencv-python, which is not installed.
I think that the actual dependency is on opencv-python3, and it doesn't affect the demo.
- The rgb-demo works!
- Continue with Luxonis calibration guide.
- Did a git submodule update --init --recursive, which updated the boards-directory.
- The command python3 install_requirements.py updated opencv_contrib_python (v4.5.5.62) and depthai (v2.21.2.0).
- Next step would be to display a Charuco board on a TV screen. Could use nb-ros and the Mitsubishi screen (42 inch) for that (tomorrow).
June 5, 2024
- Build a 3D calibration cube from three chessboard patterns.
- Followed the instructions of Luxonis to install it on WSL2.
- From usbipd, I downloaded v4.2.0
- Followed WSL support tricks.
- In a PowerShell (admin), usbipd list shows the Movidius MyraidX (not shared).
- In my case I had to do usbipd bind --busid 6-4. Now the device is Shared.
- Next is usbipd attach --wsl --busid 6-4 in a new PowerShell (user). Now the device is Attached. Note that the attach indicated that I can use IP address 172.21.224.1 to reach the host.
- I can see the device with lsusb in WSL. No further client-side tooling is required.
- No need of python3 -m pip install depthai, that library was already installed.
- Still had to do git clone https://github.com/luxonis/depthai-python.git
- Moved to depthai-python/examples. python3 install_requirements.py installed several packages, including opencv 4.10.
- Yet, python3 ColorCamera/rgb_preview.py fails. The device is also no longer visible with lsusb.
- In PowerShell the device is no longer Attached, only Shared. The busid changed, but the device is gone again when I tried a 2nd time. The Python script fails with RuntimeError: Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND.
June 4, 2024
- Found an Inria paper (2001) which at least uses the 3D calibration target.
- This paper from 2018 uses the same 3D calibration target, based on applying Zhengyou Zhang's method 3 times.
- This 1997 paper uses a 2 surface target with squares instead of circles.
- The same 2 surface target is used in this paper from 1992.
- This 2022 paper uses a 3D checkerboard instead of circles.
- They point to several github pages.
- This repository combines a Velodyne VLP-16 with a Depth camera, including video-tutorial.
- This repository has not only a calibration part, but can also simulate a 2D laser from the 3D point cloud (7y old).
- The last repository is from 2022, specialised for autonomous vehicles, and uses a new proposed calibration target:
- This Structure from Motion lecture from 2008 uses the 3D circle pattern. It also points back to Zhengyou Zhang's method.
- As an alternative, he points to Criminisi, Reid and Zisserman, where I recognize the wooden hut still used by Scaramuzza.
June 3, 2024
- Read the OpenCV blog on camera calibration. Included a nice image of the calibration pattern of the Mars rover, including patches for color and size:
- It also points to calib.io, which allows to print other calibration patterns than the standard checkerboard (and allows to add finder markers and radon checkers).
-
- To get Rerun working on WSL2, I also should install the additional packages with sudo apt-get -y install \
libvulkan1 \
libxcb-randr0 \
mesa-vulkan-drivers \
adwaita-icon-theme-full, according to the rerun troubleshooting.
-
- Reading Chapter 13 of Peter Corke's book. It's about Image Formation, with in section 13.1.5 the projection of a cube, which was the first assignment in Zurich's course.
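- As a reminder of what that exercise amounts to, a minimal sketch with OpenCV (the intrinsics and pose are made-up numbers, not from the book):

import numpy as np
import cv2

# assumed pinhole intrinsics (fx, fy, cx, cy)
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
rvec = np.zeros(3)             # no rotation
tvec = np.array([0., 0., 5.])  # cube 5 units in front of the camera
# the 8 corners of a unit cube
corners = np.array([[x, y, z] for x in (0., 1.) for y in (0., 1.) for z in (0., 1.)])
pixels, _ = cv2.projectPoints(corners, rvec, tvec, K, None)
print(pixels.reshape(-1, 2))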
- Section 13.2.1 does the calibration (easy to understand) with a 3D target.
- Looking for this 3D calibration target, but it seems not to be popular anymore.
- The paper Camera Calibration Methods Evaluation Procedure for Images Rectification and 3D Reconstruction (2008) points back to the Faugeras-Toscani method (see The Calibration Problem for Stereoscopic Vision (1989)).
- Figure 13.10 with the 3D calibration target is from Fabien Spindler.
- This PhD thesis on Pose estimation of rigid objects and robots (2023) points to Spindler's Pose Estimation for Augmented Reality: A Hands-On Survey, which is a nice overview paper. But not with the target.
- In 2018 Fabien Spindler gave a tutorial at ICRA, on object tracking.
- The tutorial points back to the code of the blob-tracking tutorial (C++) and pose estimation tutorial (also C++).
- Peter Corke's figures are coming from the Camera calibration tutorial (flat pattern, not the 3D pattern).
- There is also a 'using ViSP on Naoqi' tutorial!
- And a Bridge over OpenCV tutorial.
- I like the Image projection tutorial, which is the first Zurich assignment, but now for only 4 circular points.
- There is also a tutorial on tracking with RGB-D camera
- Which is even more fun when you use your own cube with an AprilTag.
- There is also a ros1 bridge.
- The ROS Rolling version seems to be ROS2 compatible, although the documentation is not updated completely. Yet, the installation instructions mention ros-humble-visp.
May 31, 2024
- Looking into Dockerfile, which should be updated first.
- Working in the container on RAE.
- Most packages were up-to-date, except ros-humble-rmw-cyclonedds-cpp, which also installs acl libacl1-dev libattr1-dev ros-humble-cyclonedds ros-humble-iceoryx-binding-c ros-humble-iceoryx-hoofs ros-humble-iceoryx-posh (Docker uses --no-install-recommends).
- Also gstreamer1.0-plugins-bad needed an update, which also installed libgstreamer-plugins-bad1.0-0 (485 packages to be updated).
- Also ros-humble-rtabmap-slam needed an update (no side-effects).
- For some reason the package unzip was not installed.
- Installing ffmpeg also installed libavdevice58 libavfilter7 libcdio-cdda2 libcdio-paranoia2 libcdio19 libmysofa1 libpocketsphinx3 libpostproc55 librubberband2 libsphinxbase3 libvidstab1.1 libzimg2 pocketsphinx-en-us.
- Also ros-humble-image-proc and git were not installed.
- Installing libsndfile1-dev also installed libflac-dev libopus-dev libvorbis-dev
- Running pip3 didn't work; first had to install python3-pip, which also installed python3-wheel.
- Lost the connection, the state of the docker was not saved.
- Again did git clone https://github.com/luxonis/rae-ros.git and tried MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_hw, which failed on SNDFILE_INCLUDE_DIR-NOTFOUND
- Installed apt-get install libsndfile1-dev libsndfile1, as in the Dockerfile.
- While building rae_hw, I get the warning that rae_msgs is in the workspace, but is used from the following location instead: /ws/install/rae_msgs.
- Yet, the connection broke again before the package was finished. Tried again. The make failed on /home/root/git/rae-ros/rae_hw/include/rae_hw/peripherals/mic.hpp:5:10: fatal error: rae_msgs/srv/record_audio.hpp
- So, tried MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_msgs first. That package is built without problems.
- The rae_hw build still fails on the mic_node. Commented this node out from the CMakeLists.txt.
- Next node to fail is the speaker_node, which fails on request->file_location and request->gain in rae_msgs::srv::PlayAudio_Request
- Afraid that I should have first done source ~/git/rae-ros/install/setup.sh before building the rae_hw package, so that the correct rae_msgs are loaded.
- Made sure that this script worked, by creating install/rae_hw/share/rae_hw/local_setup.sh (copy from rae_msgs) and install/_local_setup_util.py (copy of _local_setup_util_sh.py)
- Still the gain-error. Also removed the speaker_node for the moment.
- Next to fail is the led_node, because from the ros package std_msgs the member color is missing.
- Upgrading all 485 packages, including several ros-humble-*-msgs.
- I also see python3-rosdistro-modules.
- The packages ros-humble-rtabmap and ros-humble-rtabmap-util were kept back, so installed them manually.
- Still the same error for the led_node. Also commented out this node in the CMakeLists.txt.
- That worked, with only one warning on an unused variable.
-
- Continued with git clone --branch rvc3_calibration https://github.com/luxonis/depthai.git.
- Did apt-get install python3-pip (still in the container of RAE).
- Next is python3 install_requirements.py. That for instance installs opencv_contrib_python 4.5.5 and numpy 1.26.4.
- Note that I should add /root/.local/bin to PATH. Kicked out of the RAE again.
- Tried again. First built rae_msgs (after installing libsndfile1). Takes 3m42s.
- Had to recreate _local_setup_util.py, but thereafter source install/setup.sh works.
- Also the build of rae_hw now goes better. After 9m32s it already completed the build, with only three warnings on unused variables. Had to add some links to start-up scripts. Started ros2 launch rae_bringup robot.launch.py from the container. Got some errors of no available containers. Should have run source entrypoint.sh first. Kicked out of the container (not of RAE).
-
- Time for lunch.
- Tempted to create a new user (humble), give it sudo-rights and try to install depthai there.
- Yet, no apt, no apt-get, no git.
- Yet, dpkg and wget are there.
- Looked at /etc/os-release. The codename is "dunfell", the pretty name "Luxonis OS 1.14". Actually, dunfell seems to be an LTS version from the Yocto project. Yet, dunfell had Python 3.5 and tar 1.28, while Luxonis OS 1.14 is based on Python 3.8.13 and tar 1.32.
- Looked at the Luxonis container page, and saw that there is also a v0.4.0
- Changed /home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml from v0.2.2 to v0.4.0 and rebooted
-
- The Docker image is also built with Spectacular AI.
- Should try some of the examples.
- Cannot see the difference after the reboot, both uname -a and more /etc/os-release gave the same output.
- Connected another terminal to the container. Still ros2 launch rae_bringup robot.launch.py fails.
- A quite recent container is rae-cross-compile.
- On WSL Ubuntu 22.04, I did docker pull ghcr.io/luxonis/rae-cross-compile:dev_0.4.0_humble.
- Entered the rae-cross-compile container at WSL, but colcon is not known, and python3-colcon-common-extensions is not known either.
-
- Instead, on WSL, I did git clone --branch rvc3_calibration https://github.com/luxonis/depthai.git.
- Ran python3 install_requirements.py. Running depthai_demo.py doesn't work (no OAK device connected to USB), but from rae_sdk.robot import Robot fails on missing rclpy (so rae_sdk is found).
- Sourced again source /opt/ros/humble/setup.bash. rclpy now works; it now fails on missing rae_msgs.
- Built the rae_msgs from the github. After running install/setup.bash, I see /home/arnoud/git/rae-ros/install/rae_msgs/local/lib/python3.10/dist-packages at the front of PYTHONPATH.
- Next failure is missing ffmpeg. Did python3 -m pip install openai ffmpeg-python.
- Next failure is missing depthai_ros_py_bindings. Trying again with making rae_hw. That fails because CMake cannot find an internal package.
- Instead, I did git clone https://github.com/luxonis/depthai-ros.git. Looked at the documentation. Instead, did sudo apt install ros-humble-depthai-ros. If I want to build my own docker image, I can look at this documentation.
- Looking around at the RAE container, but couldn't find the ros-py-bindings. Installed apt install python3-pip again, and tried to do python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai. It complained that the extra-index-url is not correctly installed, but now I can do import depthai. I still get a warning on USB protocol not available.
- Tried to run python3 yolov4_publisher.launch.py, which gave no errors. Same for stereo_inertial_node.launch.py.
- The command ros2 launch depthai_filters example_seg_overlay.launch.py starts. Some nodes crash (no available devices), but I get at least the topics /joint_states
/parameter_events
/robot_description
/rosout.
May 27, 2024
- Started the RAE, was very happy with the startup sound.
- Checked that the Default App is running at Luxonis control hub.
- Logged in at the IP address given by the hub.
- Started docker run -it -v /dev/:/dev/ -v /sys/:/sys/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' --network host luxonis/rae-ros-robot:humble.
- Did a dmesg | tail, which gives alloc_contig_range: [a6800, a68e1) PFNs busy, no idea if that is good or bad.
- Again, the command ros2 launch rae_bringup robot.launch.py seems to fail:
[ros2_control_node-5] [INFO] [1716817013.098112201] [resource_manager]: Initialize hardware 'RAE'
[ros2_control_node-5] terminate called after throwing an instance of 'std::system_error'
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
[mic_node-8] [FATAL] [1716817013.743320487] [mic_node]: Unable to open PCM device: Device or resource busy
[mic_node-8] [INFO] [1716817013.744430406] [mic_node]: Mic node running!
[mic_node-8] mic_node: pcm.c:1636: snd_pcm_readi: Assertion `pcm' failed.
[rae]: No available devices (2 connected, but in use)
- Because also the mic-node fails, maybe my attempt to give a startup sound failed.
- Removed the crontab, by replacing the command with an empty file (in /var/spool/cron/crontabs). The command crontab -l gives no output (also no warning).
- Rebooted the system; it still plays the ping (from /etc/rc.local).
- That partly helps, the LED node is now running (ring becomes purple). Yet, the MIC node and control node still fail.
- Removed also /etc/rc.local. No ping anymore.
- That helped, now the PWM could be reset:
[mic_node-8] [INFO] [1716819090.305976605] [mic_node]: Mic node running!
[ros2_control_node-5] [INFO] [1716819090.325092897] [resource_manager]: Initialize hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.329573230] [resource_manager]: Successful initialization of hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.329928647] [resource_manager]: 'configure' hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331642272] [resource_manager]: Successful 'configure' of hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331757355] [resource_manager]: 'activate' hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331794897] [resource_manager]: Successful 'activate' of hardware 'RAE'
[component_container-1] [INFO] [1716819090.595384147] [battery_node]: Battery node running!
[ros2_control_node-5] [INFO] [1716819094.063310815] [controller_manager]: Loading controller 'diff_controller'
[spawner-6] [INFO] [1716819094.783641232] [spawner_diff_controller]: Loaded diff_controller
[ros2_control_node-5] [INFO] [1716819094.793637357] [controller_manager]: Configuring controller 'diff_controller'
[spawner-6] [INFO] [1716819094.880909982] [spawner_diff_controller]: Configured and activated diff_controller
[INFO] [spawner-6]: process has finished cleanly [pid 162]
[ros2_control_node-5] [INFO] [1716819095.976911733] [controller_manager]: Loading controller 'joint_state_broadcaster'
[spawner-7] [INFO] [1716819096.100183902] [spawner_joint_state_broadcaster]: Loaded joint_state_broadcaster
[ros2_control_node-5] [INFO] [1716819096.112847773] [controller_manager]: Configuring controller 'joint_state_broadcaster'
[ros2_control_node-5] [INFO] [1716819096.113834039] [joint_state_broadcaster]: 'joints' or 'interfaces' parameter is empty. All available state interfaces will be published
[spawner-7] [INFO] [1716819096.201486699] [spawner_joint_state_broadcaster]: Configured and activated joint_state_broadcaster
[INFO] [spawner-7]: process has finished cleanly [pid 164]
[component_container-1] [INFO] [1716819098.354302116] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[component_container-1] [INFO] [1716819098.354530036] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[component_container-1] [INFO] [1716819098.386023148] [rae]: Device type: RAE
[component_container-1] [INFO] [1716819098.677438728] [rae]: Pipeline type: rae
[component_container-1] [INFO] [1716819100.112680251] [rae]: Finished setting up pipeline.
[component_container-1] [INFO] [1716819101.050747669] [rae]: Camera ready!
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process started with pid [283]
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process has finished cleanly [pid 283]
[component_container-1] [INFO] [1716819106.099834032] [battery_node]: Battery capacity: 30.000000, Status: [Discharging] for 0 h 0 min 5 s. Time since last log: 0 h 0 min 15 secs.
- Yet, still warnings:
[component_container-1] [2024-05-27 14:11:31.049] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[component_container-1] [WARN] [1716819281.600596252] [battery_node]: Battery status low! Current capacity: 25.000000
- The USB-protocol could maybe be activated with the commands at the end of rae-ros documentation: gpioset gpiochip0 44=1, echo host > /sys/kernel/debug/usb/34000000.dwc3/mode.
-
- In another terminal, I attached to the running container image with docker exec -it CONTAINER_ID zsh (CONTAINER_ID can be found with docker ps).
- The LED-ring goes off, but I still see all topics published, including cmd_vel, rae/imu/data and /imu/data, a depth_image, /rae/left_back/image_raw and /rae/right/image_raw.
-
- Running ros2 run teleop_twist_keyboard teleop_twist_keyboard in the container seems to work (with the robot mounted on a mug).
- The python interface doesn't work inside the container.
- Instead, went to WSL Ubuntu 22.04's ~/git/rae-ros/rae_sdk. Did there python3 setup.py build, followed by python3 setup.py install --user
- That helps, because in Python 3.10.12 the call from rae_sdk.robot import Robot now fails on ModuleNotFoundError: No module named 'rclpy'
- That can be solved by source /opt/ros/humble/setup.bash. Now the import fails on ModuleNotFoundError: No module named 'rae_msgs'.
- Looked into ../rae_msgs, but no python modules there.
- Back to the container on RAE. Made a /home/root, created /home/root/git, git clone https://github.com/luxonis/rae-ros.git. In the ~/git/rae-ros/rae_sdk directory, I did again python3 setup.py build, followed by python3 setup.py install
- This gave a new error message:
File "/home/root/git/rae-ros/rae_sdk/rae_sdk/robot/led.py", line 2, in
from rae_msgs.msg import LEDControl, ColorPeriod
ImportError: cannot import name 'ColorPeriod' from 'rae_msgs.msg' (/ws/install/rae_msgs/local/lib/python3.10/dist-packages/rae_msgs/msg/__init__.py)
- Tried to do git clone https://github.com/luxonis/rae-ros.git --depth 1 --branch v0.2.2-humble to get a version that corresponds with version specified as image in the Default app. Yet, checking led.py with blame shows that the ColorPeriod was always there.
- When I look here, ColorPeriod.msg is there.
- Checked out the latest version again with git clone https://github.com/luxonis/rae-ros.git and tried the suggested in development on docker-section of the documentation: MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_hw. The build fails on SNDFILE_INCLUDE_DIR-NOTFOUND. Instead did the same command, but now for rae_msgs. That build finished without any problems.
- Tried to do source install/setup.bash, but that is looking for /home/root/git/rae-ros/local_setup.bash. Going into the install-directory doesn't help, now /opt/ros/humble/_local_setup_util_sh.py is not found.
- Yet, it has effect: in Python3, from rae_sdk.robot import Robot now fails on No module named 'depthai'.
- Did apt install python3-pip, followed by python3 -m pip install depthai. v2.26.0.0 is installed.
- Next missing module is 'depthai_ros_py_bindings'.
- That module cannot be installed with . Found this post on depthai examples on the RAE.
- According to , it should be apt install ros-humble-depthai-ros. Got some errors on arm64 architecture, with the suggestion to do apt update first. That works, but still the bindings cannot be found.
- There are 479 packages to be updated, but that will take a while. Maybe better to do the Docker instructions one-by-one.
May 24, 2024
- Should start the docker with the commands found in the rae-ros documentation.
- Looked at the github, the latest update was on March 28 (issue #75, imu correction).
- Tried docker buildx build --platform arm64 --build-arg USE_RVIZ=0 --build-arg SIM=0 --build-arg ROS_DISTRO=humble --build-arg CORE_NUM=10 -f Dockerfile --squash -t rae-ros-robot:humble-imu-corr --load .
- First had to install docker at WSL Ubuntu 22.04
- Although I did sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes, still docker buildx build --platform arm64 gave unknown flag: --platform
- Tried to skip the first two steps by just doing docker pull luxonis/rae-ros-robot:humble, but even that failed (no permission on the socket, maybe because I did the first docker run with sudo rights).
- Doing an upgrade on all pending packages.
- Applied this trick (adding myself to the docker group).
- Now docker pull luxonis/rae-ros-robot:humble works. Actually, that docker image is one built on 29 Nov 2023, so quite old.
- Next step is to copy that file to rae. The command to do that is specified in step 3, but from this post I added | bzip2 | pv | in between.
- I hope it fits, because the image is 5.82 GB, while the disk has only 2.7 GB capacity (2.3 GB used).
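- The full transfer pipeline then presumably looks like this (image tag and robot address assumed from the steps above, not verified): docker save luxonis/rae-ros-robot:humble | bzip2 | pv | ssh root@192.168.197.55 'bunzip2 | docker load'.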
- Strangely enough, the docker images from the Default app are not visible.
- In principle the docker buildx should work, --platform is one of its options.
- After sudo apt install docker-buildx the second step works.
- Strangely enough, all my Ubuntu terminals were gone after lunch.
- Uninstalling the 2nd Default app.
- ssh to rae. Now at least the image appears (5 months old), when doing docker image ls.
- Run docker run -it --restart=unless-stopped -v /dev/:/dev/ -v /sys/:/sys/ --privileged --net=host luxonis/rae-ros-robot:humble. Entered a shell inside this image. Could do ros2 topic list, which gives only /parameter_events
/rosout.
- Started another Ubuntu terminal, ssh into rae, created another session with docker exec -it zsh. In that session I did ros2 launch rae_bringup robot.launch.py
- In the original session I could do ros2 topic list again, which now gives:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/lcd
/leds
/odometry/filtered
/parameter_events
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/robot_description
/rosout
/set_pose
/tf
/tf_static
- Did a ros2 topic list in WSL, but only the default two topics are shown. Yet, no ROS_DOMAIN_ID is specified, so both are running domain 0, as explained in this ConstructSim lesson.
- Rebooted, to look if I could see it when I use native Ubuntu.
- Could not login, but the Luxonis hub also gave status unknown, and the ring was purple.
- Rebooted the RAE. Now the hub indicates Running again. Could also log in via USB with ssh root@192.168.197.55
- No ros2 on nb-dual native Ubuntu20.04, so went to WS9.
- Started ros2 launch rae_bringup robot.launch.py, which showed a subset of the topics (but it WORKED!)
- At startup, I get a number of errors:
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
component_container-1] [2024-05-24 11:52:18.518] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[mic_node-8] [FATAL] [1716551538.531122684] [mic_node]: Unable to open PCM device: Device or resource busy
[component_container-1] [ERROR] [1716551542.707475672] [rae]: No available devices (3 connected, but in use)
[component_container-1] [INFO] [1716551542.707825674] [rae]: No ip/mxid specified, connecting to the next available device.
- Yet, this time I started with the first run command from step 4, while previously I used the second run command (including the device-cgroup-rule). Still, there are some errors.
- Trying a reboot of RAE. Even after a reboot some devices are not available.
- This is the list of topics visible from WS9:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/odometry/filtered
/parameter_events
/rae/imu/data
/robot_description
/rosout
/set_pose
/tf
/tf_static
- When I do ros2 node list I get:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/odometry/filtered
/parameter_events
/rae/imu/data
/robot_description
/rosout
/set_pose
/tf
/tf_static
- With the GPIO pins, I could try to unexport, as described in this post. Yet, rebooting seems to be enough.
- Note the ekf-settings, which point to this localisation algorithm.
- Also note that with the full stack, I can also activate the ros_bridge which should allow me to communicate with ros1-nodes.
- Playing a sound on RAE works with gst-launch-1.0 playbin uri=file:///home/root/sounds/complete.oga, which I downloaded from freedesktop.
- Adding the sound via a crontab doesn't work, because such a directory doesn't exist.
- Used the trick at the end of this post to set the cronjob. Yet, it doesn't seem to work.
- Doing the same trick via /etc/rc.local as suggested here worked.
- Tried again, but now with simply ros2 launch rae_hw control.launch.py. Yet, I get both:
[ERROR] [ros2_control_node-4]: process has died [pid 158, exit code -6, cmd '/opt/ros/humble/lib/controller_manager/ros2_control_node --ros-args --params-file /tmp/launch_params_z1bgybcn --params-file /ws/install/rae_hw/share/rae_hw/config/controller.yaml -r /diff_controller/cmd_vel_unstamped:=cmd_vel'].
[ERROR] [mic_node-8]: process has died [pid 166, exit code -6, cmd '/ws/install/rae_hw/lib/rae_hw/mic_node --ros-args'].
- So teleop doesn't work. I see the lcd and led_node.
- Tried the python code inside the container on the RAE. Still: no module rae_sdk found.
- Inside the container I have Python 3.10.12.
- Still in the container, I did apt install python3-pip
- Yet, installing rae-sdk didn't work.
- Searched for the sdk both on the bare rae and container. Only found /usr/lib/robothub/sdk/python, which contains the default app code.
- Ivo suggested in his post that the rae_sdk should be somewhere in /ws/build, but I do not see it.
- Try next week to build a new docker file, which is not 5 months old.
- Note that in the entrypoint, which is run inside the container, they do echo ${channel} > /sys/class/pwm/pwmchip0/export.
May 23, 2024
- Booted up the RAE with the USB-charging. Starts to drive on one wheel. Reboot.
- Was able to ssh as root into RAE.
- According to RAE python SDK, I should replace the image in the robotapp.toml.
- Yet, there are several robotapp.toml files:
/usr/lib/robothub/builtin-app/robotapp.toml - Dec 22
/home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml - May 22
/home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml - May 22
/data/persistent/home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml - May 22
/data/persistent/home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml - May 22
- These could be the two Default Agents - both trying to start.
- First try to do the suggestions from this discussion.
- Could start python3, but from rae_sdk.robot import Robot fails (no module rae_sdk).
- Looked into /home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml because that was installed first.
- Can see that it is v2.0 of the Default agent, and that the image is already ghcr.io/luxonis/rae-ros:v0.2.2-humble. Same for the ae753e9f version, including the copies at /data/persistent. Only the one in /usr/lib/robothub/builtin-app/robotapp.toml points to "ghcr.io/luxonis/robothub-app-v2:2023.218.1238-rvc3"
- Note that the humble version also contains some pre-launch commands:
export ROS_DOMAIN_ID=30 . /opt/ros/$ROS_DISTRO/setup.sh . /ws/install/setup.sh
- Checking env | grep ROS gives no response. The Luxonis hub indicates that ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3 is running.
- Checking if I can find those setup.sh. Found them in:
./home/robothub/containers/overlay/7766e7ee54cc4fb418d6658cde54a7af7bca8dd5c6176b29e2a494c5df7c45d0/diff/sai_ros/spectacularai_ros2/install/setup.sh
./home/robothub/containers/overlay/2e309fc781e1adc67486b2d7bcbd209eef185988c3881c075a996ef8df033204/diff/opt/ros/humble/setup.sh
./home/robothub/containers/overlay/293f64214d5d1a98b385cddfe269d9a5ab59e1165bde64270280b844fc07f6a5/diff/ws/install/setup.sh
./home/robothub/containers/overlay/ce5ad1b5e3efd36576fa12656851b3ad762100188603ff5abfa17aead1ce5c5d/diff/opt/ros/humble/setup.sh
- And in the corresponding /data/persistent.
- Checked with docker container ls but no containers are running. Also see nothing with docker image ls
- Should start the docker with the commands found in the rae-ros documentation.
-
- Read Chapters 9 and 10 from Dudek and Jenkin. Both Pose maintenance and Mapping point back to quite old algorithms (FastSLAM at best). Also the assignments are not updated to ROS.
May 22, 2024
- Used WS9 to do the TurtleBot example from rerun, but without the rerun node.
- Installed sudo apt install ros-humble-navigation2 ros-humble-turtlebot3 ros-humble-turtlebot3-gazebo, which installed 193 new packages.
- After that I did:
source /opt/ros/humble/setup.bash
export TURTLEBOT3_MODEL=waffle
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/opt/ros/humble/share/turtlebot3_gazebo/models
- Followed by ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False.
- Rviz started up, but complained with a Global Status error (Frame map doesn't exist) and in the log on a missing transform from base_link to odom.
- Set the Fixed Frame of rviz to base_link, but that gave a map-error.
- Explicitly created a transform between base_link and odom with . Yet, rviz responded with Message Filter dropping message: frame 'odom' at time 0.000 for reason 'discarding message because the queue is full'.
- In Gazebo I see the world, but I don't see a TurtleBot.
- Looked at nav2 getting started.
- Checked /opt/ros/humble/share/turtlebot3_gazebo/models/turtlebot3_waffle, but model is there. Also TURTLEBOT3_MODEL and GAZEBO_MODEL_PATH are OK.
- Looked into the log. Actually, /opt/ros/humble/share/nav2_bringup/worlds/waffle.model is used. Yet, I see that spawn_entity.py fails on missing lxml.
- Installed lxml (v5.2.2) with pip install lxml, but that gives an empty world in Gazebo:
[spawn_entity.py-3] [INFO] [1716368313.041980781] [spawn_entity]: Loading entity XML from file /opt/ros/humble/share/nav2_bringup/worlds/waffle.model
[ERROR] [gzserver-1]: process has died [pid 290368, exit code 255, cmd 'gzserver -s libgazebo_ros_init.so -s libgazebo_ros_factory.so /opt/ros/humble/share/nav2_bringup/worlds/world_only.model'].
- Tried again with pip install lxml==4.9.4. No success. v3.8.0 could not be build. v4.8.0 was still empty world.
- Looked at Turtlebot quickstart.
- There they recommended sudo apt install ros-humble-gazebo-*, which installed 54 additional packages.
- Everything recommended is installed.
- Switching to the simulation page. There they recommend to build the simulation from source. Had to do pip install catkin_pkg and deactivate conda, but after that the turtlebot_ws was built. Also the burger in the empty world crashed.
- It seems that pip install lxml==4.6.3 is the latest version that can be installed. Still crashes.
-
- Back to RAE. Recharging helped, the robot started up.
- Went to the RobotHub FrontEnd. There I was recommended to use the new FrontEnd at Hub.
- There I see one device online (rae), running agent 22.223.1855. The RAE had stopped the default app (v1.2.1 - Nov 2023), which indicated that a new version was available. It didn't want to start. Changed to v2.0.0 (Jan 2024). Seems to want to install to 127.0.0.1, instead of the LAB42 IP. The Front-End gives Download in Progress.
- Back to Get Started with RAE. Finishing up is going to the web-interface of the RAE itself https://:9010. I can select 'Open Frontend', although that gives 'Waiting for stream' (Chrome and Edge browser). At the bottom right I see Session ID 'Disconnected', so it seems that it requires that the Default App is running. Actually, I now see in the Perception Apps page two Default Apps which are starting; one of them is now running. According to the overview on the Luxonis hub, it is v2.0.0. The battery level was 47%, and when I connected the USB-C to USB-B cable the device is charging and streaming:
- The first assignment could be to calibrate the camera, because the perspective is quite distorted:
- Note that there is not much view of the ground nearby, so it is better to follow the lines at the ceiling than on the ground.
- Luxonis has a Calibration guide.
May 21, 2024
- Looked again at the RAE SDK python interface, including the ROS interface. Not clear when it was updated, although they specify firmware OS 1.14, Agent 23.223.1855 and rae-ros:v0.2.2-humble. Those versions are the same as in the discussion of Jan 27, 2024.
- The robothub github is now updated to version 2.6.0 (few weeks old).
- The v2.5.0 update seems to be a major update, with support for cv2 for local development.
- In the blogs I see an update (March 18, 2024) that RobotHub has become a stable product (with a new app every month).
- Again (as Feb. 26), I don't see a yellow or blue light while charging.
- Also WSL Ubuntu has a number of frozen terminals, still running the TurtleBot navigation example. Time to go home.
May 17, 2024
- Finished the remainder of Chapter 6. The Q-learning seems of less interest for VAR.
- Looked again at my progress of April 24 with rerun.
- Some new examples were published, so started with git pull in ~/git/rerun at the Ubuntu 22.04 partition of WSL on nb-dual.
- Yet, this fails on fatal: Need to specify how to reconcile divergent branches
- Tried the suggested git config pull.rebase false, but that fails: it wants to do a forced update of origin/cmc/dense_chunks_1_client_side (and I was not logged in, so this push fails).
- After git config pull.rebase true the command git pull gives:
Successfully rebased and updated refs/heads/latest
- Continued with the ros_node example.
- Gazebo and rviz start up nicely, but nothing is visible in the rerun-screen. Seems that I have to change to the latest version (0.16.0), which was released last night.
- Did that with git checkout tags/0.16.0. At that moment I was without branch, so I did git switch -, followed by a git pull.
- The viewer was still 0.15.1 (checked with about in upper-left corner), so looks I have to rebuild.
- For that I need pixi, which can be installed with curl -fsSL https://pixi.sh/install.sh | bash
- After source ~/.bashrc I tried the suggested pixi run py-build --release.
- Yet, this fails. Checked the cargo version: cargo was not installed.
- Installed cargo (with rust) by curl https://sh.rustup.rs -sSf | sh. Activated it by source ~/.cargo/env.
- Continued with pixi run rerun. That works, the viewer is now version 0.16.0.
- Also tried the suggested pixi run py-build --release. Again 351 packages are built. This one fails on couldn't read crates/re_web_viewer_server/src/../web_viewer/re_viewer.js: No such file
- Indeed, those two files are gone. Also cannot find them in the main version.
- Seems that the files are generated. The readme of this crate suggests running pixi run rerun-build-web. Now those two files are there.
- Still, python3 uses the old viewer. Tried python3 -m pip install rerun-sdk==0.16.0. That was the trick to get it working:
- I am now as far as before (but with a webviewer), because I do not see an image from the Realsense.
- The ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False gives many warnings that the transform from base_link to map doesn't exist.
- Explicitly defined this transform with ros2 run tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 map base_link. Now I got far more topics, including intel_realsense_r200_depth/image_raw:
/amcl/transition_event
/amcl_pose
/behavior_server/transition_event
/bond
/bt_navigator/transition_event
/clicked_point
/clock
/cmd_vel
/cmd_vel_nav
/cmd_vel_teleop
/controller_server/transition_event
/cost_cloud
/diagnostics
/downsampled_costmap
/downsampled_costmap_updates
/evaluation
/global_costmap/costmap
/global_costmap/costmap_raw
/global_costmap/costmap_updates
/global_costmap/footprint
/global_costmap/global_costmap/transition_event
/global_costmap/published_footprint
/global_costmap/voxel_marked_cloud
/goal_pose
/initialpose
/intel_realsense_r200_depth/camera_info
/intel_realsense_r200_depth/image_raw
/intel_realsense_r200_depth/points
/joint_states
/local_costmap/clearing_endpoints
/local_costmap/costmap
/local_costmap/costmap_raw
/local_costmap/costmap_updates
/local_costmap/footprint
/local_costmap/local_costmap/transition_event
/local_costmap/published_footprint
/local_costmap/voxel_grid
/local_costmap/voxel_marked_cloud
/local_plan
/map
/map_server/transition_event
/map_updates
/marker
/mobile_base/sensors/bumper_pointcloud
/odom
/parameter_events
/particle_cloud
/performance_metrics
/plan
/plan_smoothed
/planner_server/transition_event
/preempt_teleop
/received_global_plan
/robot_description
/rosout
/scan
/smoother_server/transition_event
/speed_limit
/tf
/tf_static
/transformed_global_plan
/velocity_smoother/transition_event
/waypoint_follower/transition_event
/waypoints
- Tried to visualise this simulated image with ros2 run image_view image_view --ros-args -r image:=/intel_realsense_r200_depth/image_raw, but nothing happens. Will try a restart of the simulator. Note that also the transform base_link to odom is missing.
- Also adding that transform, but now I get the warning [rviz]: Message Filter dropping message: frame 'odom' at time 0.000 for reason 'discarding message because the queue is full'
- The transform was complaining about deprecated arguments, so tried the new-style ros2 run tf2_ros static_transform_publisher --frame-id base_link --child-frame-id odom
- Could use some of the image_view alternatives mentioned in this blog.
- One missing transform was from base_footprint to map. After that the TF-tree in rviz2 looked OK, but I still see no images with image_view.
- Seems that the CPU is a bit overwhelmed, so let's try again doing this natively (at home).
May 16, 2024
- Starting to read Chapter 6. The first problem looks very interesting: learning to follow a line with an ANN with a single hidden layer (first in Gazebo, then on the real robot).
- Continue with section 6.7
May 15, 2024
- Read A Standard Rigid Transformation Notation Convention, which not only analyses existing conventions (including Corke's), but is also accompanied by a github page. The WRT library has command-line, Python and C++ bindings.
- The proposed notation is summarized in this table:
- When used, the following sentence can be added:
"
In the following, the orientation of {s} with respect to {b} is denoted by $_b\mathbf{R}_s$, and the position of {s} with respect to {b} and expressed in {c} is denoted by $^c_b\mathbf{p}_s$, as defined by the RIGID notation convention [link to this document ]
"
- I can also use \usepackage{rigidnotation}, so that I can use $\Rot{s}{b}$ and $\Pos{s}{b}{c}$ instead.
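- A minimal LaTeX sketch of how that would look in a document (the preamble and example sentence are mine; the macros \Rot and \Pos come from the rigidnotation package, with the argument order as in the notes above):
\documentclass{article}
\usepackage{rigidnotation}

\begin{document}
% orientation of frame {s} with respect to frame {b}
The camera orientation is $\Rot{s}{b}$, and its position,
expressed in frame {c}, is $\Pos{s}{b}{c}$.
\end{document}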
-
- Read chapter 5 (Vision and Algorithms). A lot is dedicated to 'old' CV; for instance, the calibration section points to tools from 2000. The Gazebo exercises look interesting.
May 14, 2024
- Read Chapter 4 (Non-Visual Sensors and Algorithms). Not only Sonar and Lidar, but also EKF / Particle SLAM are already covered.
April 24, 2024
- Looking if I can get rerun ros_node working on my Ubuntu 22.04 WSL on nb-dual.
- Rerun requires sudo apt install ros-humble-desktop gazebo ros-humble-navigation2 ros-humble-turtlebot3 ros-humble-turtlebot3-gazebo, so have to see if the visual parts work.
- No version of ros installed on the WSL Ubuntu 22.04, so first had to set up the sources, following the install instructions of ROS humble.
- Did a clone of rerun github.
- Checking if WSL allows graphical programs following WSL instructions. Gimp works, although the program gives warnings that the theme engine cannot find module_path: "pixmap".
- Running the TurtleBot examples works out of the box:
- Yet, it is not clear how scans and/or observations should be streamed. Checked with ros2 topic list; the following topics are published:
/clicked_point
/clock
/downsampled_costmap
/downsampled_costmap_updates
/global_costmap/costmap
/global_costmap/costmap_updates
/global_costmap/voxel_marked_cloud
/initialpose
/joint_states
/local_costmap/costmap
/local_costmap/costmap_updates
/local_costmap/published_footprint
/local_costmap/voxel_marked_cloud
/local_plan
/map
/map_updates
/mobile_base/sensors/bumper_pointcloud
/parameter_events
/particle_cloud
/plan
/robot_description
/rosout
/scan
/tf
/tf_static
/waypoints
- Looked into python/ros_node/main.py. The ros_node is ready for a depth camera, yet this sensor is not mounted on the simulated TurtleBot3:
self.path_to_frame = {
    "map": "map",
    "map/points": "camera_depth_frame",
    "map/robot": "base_footprint",
    "map/robot/scan": "base_scan",
    "map/robot/camera": "camera_rgb_optical_frame",
    "map/robot/camera/points": "camera_depth_frame",
}
- The depth camera is in principle an intel_realsense_r200_depth.
- Could look into this github repository to add the Intel Realsense to the TurtleBot.
- Yet, the turtlebot3_waffle is not in the ~/.gazebo/models-directory. Looking at the nav2 demo, it seems that the model is loaded from /opt/ros/humble/share/nav2_bringup/worlds/waffle.model
- Note that during nav2 bringup, I receive many warnings that frame "odom" doesn't exist.
- Checked that model. The realsense_depth_camera is already part of that model, which should load the driver libgazebo_ros_camera.so
- Seems that this can be installed with sudo apt-get install ros-humble-gazebo-plugins, but that was already implicitly done.
- This example shows a Kinect being used in nav2. The depth camera seems to produce scans, but an LDS lidar is also mounted in the model, which also seems to produce scans.
April 23, 2024
- Scanned this EDUCON paper, which describes an @Home-like course. Many links are given, except to the course itself.
April 22, 2024
April 12, 2024
April 11, 2024
- Read the first two chapters from Dudek's 3rd edition. The history chapter feels a bit ancient, but I liked Chapter 2, where 'all' robot problems were demonstrated with a point-robot. Chapter 2 already contains an extensive list of exercises.
March 6, 2024
- Looked at the first exercise, drawing a cube on the chessboard. The camera matrix K.txt is already given, so time to look into the calibration tools from Scaramuzza.
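- As a first sketch for this exercise (assuming K.txt holds the 3x3 camera matrix given with the exercise; the board pose, cube size and file names are my own placeholders, and the pose would normally come from solvePnP on the chessboard corners), the projection itself is a single cv2.projectPoints call:
import cv2
import numpy as np

K = np.loadtxt('K.txt')            # 3x3 camera matrix, given with the exercise
dist = np.zeros(5)                 # assumption: no lens distortion
rvec = np.zeros(3)                 # hypothetical board pose in the camera frame
tvec = np.array([0.0, 0.0, 1.0])
s = 0.08                           # assumed cube edge length in meters

# cube corners: 4 on the board plane (Z=0), 4 raised above it (Z=-s)
cube = np.float32([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0],
                   [0, 0, -s], [s, 0, -s], [s, s, -s], [0, s, -s]])
pts, _ = cv2.projectPoints(cube, rvec, tvec, K, dist)
pts = pts.reshape(-1, 2).astype(int)

img = cv2.imread('frame.png')      # hypothetical input image
for i, j in [(0,1),(1,2),(2,3),(3,0),(4,5),(5,6),(6,7),(7,4),
             (0,4),(1,5),(2,6),(3,7)]:
    cv2.line(img, tuple(pts[i]), tuple(pts[j]), (0, 255, 0), 2)
cv2.imwrite('cube.png', img)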
March 5, 2024
- Checking the Vision Algorithms for Mobile Robotics slides.
-
- From the 1st lecture - Introduction I really liked the example of Roomba 980's Visual SLAM to speed up the cleaning (slide 38).
- Next to Chapter 4, Szeliski's book Computer Vision gets the most attention.
- For sure Chapter 11 - Structure from motion and SLAM and onwards are interesting.
- Szeliski's book is also used in Computer Vision 1 and Computer Vision 2.
- In the CV2 Overview the Project Evaluation Criteria can be found.
- Structure from motion and SLAM is covered in the 3rd lecture of CV2. No explicit reference to Chapter 11 is made, only to the book itself (the 1st edition, 2010).
- Chapter 12 is Depth Estimation, which is not covered by CV2.
- Lecture 4 covers Chapter 13 - 3D Reconstruction.
- The last chapter of Szeliski is Image-Based rendering, which is also not explicitly covered in CV2.
- For this course Chapter 8 - Image Alignment and Stitching could cover BEV images, while Chapter 9 - Motion Estimation seems the most important. No idea if this is covered in CV1.
- Actually, BEV is not covered in Chapter 8. Not mentioned at all in the book.
- The first lecture - Introduction ends with the learning goal: to build a full visual odometry pipeline (for a MARS rover).
- This article gives an overview of Open SLAM challenges (Table 2), and points back to an overview paper of 2021 and Scaramuzza's overview paper from 2016.
- Finishing the mini project gives a 0.5 grade bonus.
-
- Also checked 2nd lecture - image formation. This corresponds to Chapter 2.1 and 2.3 of Szeliski's book.
- The lab exercise is to implement an augmented reality wireframe cube. Can be done with a calibration board or the floor of our Intelligent Robotics Lab.
-
- Also checked 3rd lecture - camera calibration. This corresponds to Chapter 2.1 of Szeliski's book.
- The lab exercise is to implement a camera motion estimator.
- Camera localisation can be solved with 3 or 4 points. For 3 points the law of cosines is used (see the equations at the end of this lecture's notes).
- The 3 point algorithm can be found in OpenCV based on Gao's algorithm.
- Scaramuzza was an author of A3P3P (also in OpenCV), which solves the camera pose (but not the distances from the points).
- The lecture ends with unwarping fish-eye and omnidirectional cameras; the bird's-eye view is also not covered.
- Would be nice to solve this lab exercise in ROS.
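- For reference, the law-of-cosines system behind the 3-point case (standard textbook material, not copied from these slides): with unknown distances $s_i$ from the camera center to the three world points $P_i$, known inter-point distances $d_{ij} = \|P_i - P_j\|$, and angles $\theta_{ij}$ between the corresponding viewing rays:
\begin{align}
d_{12}^2 &= s_1^2 + s_2^2 - 2\, s_1 s_2 \cos\theta_{12}\\
d_{13}^2 &= s_1^2 + s_3^2 - 2\, s_1 s_3 \cos\theta_{13}\\
d_{23}^2 &= s_2^2 + s_3^2 - 2\, s_2 s_3 \cos\theta_{23}
\end{align}
Solving this quadratic system for $s_1, s_2, s_3$ gives up to four solutions, which is why a 4th point (or RANSAC) is needed to disambiguate.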
-
- Also checked 4th lecture - image filtering. This covers not only the classic filters, including Canny edge detection. It also shows that HED, a CNN-based detector from 2015, outperforms Canny both in performance and computation time. Transformer models are now state-of-the-art. This corresponds to Chapter 3.2, 3.3 and 7.2.1 of Szeliski's book.
-
- Also checked 5th lecture - point feature detection and matching. This corresponds to Chapter 7.1 and 9.1 of Szeliski's book and Peter Corke's book (13.3 in 2nd edition - 12.3 in 3rd edition).
- The Lab exercise is to implement the Harris corner detector.
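- A minimal numpy/OpenCV sketch of that exercise (the Gaussian window, k, and the threshold are my own parameter choices):
import cv2
import numpy as np

gray = cv2.imread('img.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# image gradients
Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

# structure tensor entries, smoothed over a Gaussian window
Ixx = cv2.GaussianBlur(Ix * Ix, (5, 5), 1.0)
Iyy = cv2.GaussianBlur(Iy * Iy, (5, 5), 1.0)
Ixy = cv2.GaussianBlur(Ix * Iy, (5, 5), 1.0)

# Harris response R = det(M) - k * trace(M)^2
k = 0.04
R = (Ixx * Iyy - Ixy ** 2) - k * (Ixx + Iyy) ** 2
corners = np.argwhere(R > 0.01 * R.max())  # (row, col) of strong corners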
-
- The 6th lecture continues with point feature detection and matching. This corresponds also to Chapter 7.1 of Szeliski's book and Peter Corke's book (13.3 in 2nd edition - 12.3 in 3rd edition).
- The Lab exercise is to implement the SIFT blob detector.
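- The exercise is to implement the DoG pyramid yourself; as a reference to compare against, OpenCV's built-in SIFT (patent-free since OpenCV 4.4) can be used:
import cv2

gray = cv2.imread('img.png', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
# draw keypoints with their scale and orientation
out = cv2.drawKeypoints(gray, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('sift_keypoints.png', out)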
-
- The 7th lecture starts with multi-view geometry. Recovering 3D structure from two images is seen as a simple form of 2-view geometry. With K, T, R known this is stereo vision. This corresponds to Chapter 12 of Szeliski's book and Peter Corke's book (14.3 in 2nd edition - 14.3 in 3rd edition). It would be nice to work with the RAE in a corridor with lines radiating from the epipole, while doing forward motion with the robot.
- The Lab exercise is stereo vision.
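- For the stereo exercise, a block-matching sketch with OpenCV (the file names and matcher parameters are my own guesses and would need tuning for the RAE cameras):
import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# semi-global block matching; disparities are returned as fixed-point (x16)
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype('float32') / 16.0

# with baseline b and focal length f (in pixels): depth = f * b / disparity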
-
- The 8th lecture continues with multi-view geometry. Recovering 3D structure from two images while K, T, R are unknown is seen as a simple form of 2-view structure from motion. This can only work if there are at least 5 correspondences, although that leaves open 10 distinct solutions. With the 8-point algorithm this is solved. This corresponds to Chapter 11.3 of Szeliski's book and Peter Corke's book (14.2 in 2nd edition - 14.2 in 3rd edition). This also means that here we have overlap with CV2. Scaramuzza's order seems more logical than Szeliski's order.
- The Lab exercise is to implement the 8-point algorithm.
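- A minimal numpy sketch of the 8-point algorithm (assuming the point sets are already normalized; Hartley normalization is left out for brevity):
import numpy as np

def eight_point(p1, p2):
    """Estimate the fundamental matrix F with x2^T F x1 = 0.
    p1, p2: (N, 2) arrays of corresponding image points, N >= 8."""
    x1, y1 = p1[:, 0], p1[:, 1]
    x2, y2 = p2[:, 0], p2[:, 1]
    # each correspondence gives one row of the linear system A f = 0
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector = last row of V^T
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt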
-
- The 9th lecture continues with multi-view geometry with batches of images, so recovering 3D structure from more than two images. The redundancy can be used to remove outliers, because before, perfect correspondence was assumed. Finally computer vision is applied to robotics, because the assumption is made that not every movement of the camera is possible. For a planar robot planar motion can be assumed (see the motion model in Thrun's book). The lecture ends with SuperGlue, which includes a Deep Front-End and Middle-End! No correspondence with the two textbooks is made.
- The Lab exercise is on Visual Odometry integration and the mini-projects.
-
- The 10th lecture finishes multi-view geometry with the whole Visual Odometry pipeline, which combines 2D-to-2D motion estimation (Lecture 8), 3D-to-2D (Lecture 3) and 3D-to-3D (point cloud registration). SLAM algorithms such as SLAM++ are mentioned in the last step, the Pose-Graph Optimization. Pose-Graph Optimization is less precise, but more efficient. Loop closure will be covered in Lecture 12, to upgrade VO to Visual SLAM. No correspondence with the two textbooks is made.
- The Lab exercise is to combine the P3P algorithm with RANSAC.
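- OpenCV bundles exactly this combination in solvePnPRansac; a sketch (the point arrays, camera matrix and thresholds are placeholders, not from the exercise):
import cv2
import numpy as np

obj_pts = np.random.rand(20, 3).astype(np.float32)   # placeholder 3D points
img_pts = np.random.rand(20, 2).astype(np.float32)   # placeholder 2D detections
K = np.eye(3)                                        # placeholder camera matrix
dist = np.zeros(5)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    obj_pts, img_pts, K, dist,
    flags=cv2.SOLVEPNP_P3P,        # minimal 3-point solver inside RANSAC
    reprojectionError=2.0,         # inlier threshold in pixels
    iterationsCount=100)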
-
- The 11th lecture covers Tracking. No correspondence with the two textbooks is made. Instead, they refer to Lucas Kanade 20 Years On: A Unifying Framework.
- The Lab exercise is to implement the Kanade-Lucas-Tomasi tracker.
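- Again OpenCV can serve as a reference implementation to compare the own tracker against (window size and corner parameters are my own choices):
import cv2
import numpy as np

prev = cv2.imread('frame0.png', cv2.IMREAD_GRAYSCALE)
curr = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)

# good features to track (Shi-Tomasi corners)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)

# pyramidal Lucas-Kanade tracking of those corners into the next frame
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)
tracked = p1[status.flatten() == 1]   # keep only successfully tracked points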
-
- The 12th lecture consists of three parts. The first part covers Place Recognition (yet, Bag-of-Visual-Words is described, but not applied to loop-closure). No correspondence with the two textbooks is made. The second part covers Dense 3D Reconstruction (or multi-view stereo), where the Disparity Space Image is explained. This covers Chapter 12.7 of Szeliski's book. The third part starts very general with Deep Learning, but then focuses on methods relevant for Computer Vision like NeRF.
-
- The 13th lecture covers Visual Inertial Fusion. The RAE also has an IMU (check - yes, a BMI270 6-axis IMU), so this is relevant for our course. The loosely coupled version of VIO should be easy to implement (although it is inaccurate and should not be used). The case study ROVIO is EKF based. No correspondence with the two textbooks is made.
- The Lab exercise is Bundle Adjustment.
-
- The last lecture is about Event-Based vision, which is not relevant for our course. The event camera is seen as an IMU on steroids, so they present the combination of events, images and IMU as Ultimate SLAM?. No correspondence with the two textbooks is made.
- The Lab exercise is the final VO integration.
-
- Let's assume that CV1 covers Szeliski's Chapter 2 and 3 well enough. So, I should check Chapter 7 and beyond.
- Received an overview of CV1 from Shaodi. Not only Chapter 2 and 3 are covered. In the 3rd week motion and optical flow are covered (Chapter 7 and 9). The 4th week covers stitching (sections 8.1, 8.2, 6.1 and 6.2). In the 5th week sections 5.2, 6.3 and 6.4 are covered. In the 6th week sections 5.3, 5.4, 12.1, 12.2, 12.3 and 13.2 are covered.
- This combines with CV2, which covers Chapter 11 and 13.
- So, only Chapter 10 (Computational photography) and Chapter 14 (Image-based rendering) are skipped.
-
- For Peter Corke's book, I should look at Chapters 12 till 14. On September 18, 2023 I read until Chapter 6. Chapters 7 and 8 are purely robotic. Chapters 9-11 are quite basic, although section 11.7 covers reshaping images, including image pyramids and warping. Bird's-eye view is also not covered in this book.
- Read section 11.7. Exercise 8b from this chapter is nice (write a conversion from pixel to latitude and longitude and overlay GPS data over a satellite image of your home).
- Another option is to modify exercise 9 (track moving vehicles) to the RAE in the lab (from a ceiling camera).
- Read Chapter 12 until 12.1.1. It nicely indicates that in robotics we can move the robot to a better viewpoint or add a diffuse light source near the camera to reduce effects like specular highlights. Next will be Object Instance Representation.
March 4, 2024
- Looking up the learning goals of the MScAI. I couldn't find learning trajectories in datanose, but found at least the course objectives from Computer Vision 1 and Computer Vision 2.
- Checked the TER 2023-2024. Objectives are split into two parts. First the general academic ones, after that the exit qualifications. Next to knowledge of the current theories and methods of AI and its subfields, also:
- has the capability to apply this knowledge to analyse, design and develop AI-systems;
- can formulate scientific questions and is able to solve problems with the aid of abstraction and modelling;
- is able to contribute to further developments of the theories, methods and techniques of AI in a scientific context;
- is able to express him/herself clearly on a technical/mathematical and general level;
- is aware of the social context and consequences of conducting AI research and work;
- can obtain an academic position at a university or research centre or scientific/applied position in the industry.
- Looking into the two textbooks. Peter Corke's book has the objectives in the preface:
- Give a cohesive narrative that covers robotics and computer vision - both separately and together.
- Show how complex problems can be decomposed and solved.
- Allow the user to work with real problems, not just trivial examples.
- From Peter Corke's introduction:
- The book will help to fill the gaps between what you learned as an undergraduate, and what will be required to underpin your deeper study of robotics and computer vision.
- Another option is to work on the applications and look what knowledge is needed for a solution.
- Another option is to structure the course as done at the Robot Academy.
- From Dudek and Jenkin's book I have for the moment only the 2nd edition. The book is not very explicit on its objectives.
- These are the objectives we proposed to the Master - Upon successful completion, students will have the knowledge and skills to:
- Discuss the history, concepts and key components of robotics technologies.
- Describe and compare various robot sensors and their perception principles that enable a robot to analyze its environment, reason and take appropriate actions toward the given goal.
- Analyze and solve problems in spatial coordinate representation and spatial transformation, robot locomotion, kinematics, motion control, localization and mapping, navigation and path planning.
- Apply and demonstrate the learned knowledge and skills in practical robotics applications.
- Critically appraise current research literature and conduct experiments with state of the art robotic algorithms on a robotic platform.
- Effectively communicate engineering concepts and design decisions using a range of media.
-
- Looked at Zurich's Vision Algorithms for Mobile Robots course.
- It has several exercises, including those on Visual Odometry. It also has an optional Mini project on Visual Odometry.
- Note that this course is also based on Peter Corke's book. In addition, it points to Chapter 4 - Perception of "Autonomous Mobile Robots", by R. Siegwart, I.R. Nourbakhsh, D. Scaramuzza
- In addition, I see Zisserman's Multiple View Geometry, 2003.
- I also see the classic An Invitation to 3D Vision, by Y. Ma, S. Soatto, J. Kosecka, S.S. Sastry (Nov 2001)
- Last but not least Computer Vision: Algorithms and Applications, by Richard Szeliski
-
March 1, 2024
February 29, 2024
- Received the 3rd edition of Computational Principles of Mobile Robotics.
- In the preface they recommend to concentrate on Chapters 6-10. They especially put focus on the Deep Learning chapter.
- They even have ROS2 compatible material with the book (Instructor resource).
- The book comes with exercises. The 2nd chapter directly has SLAM for a point-robot, including drive_to_goal.py code.
- For a high-level introduction to Deep Learning (in Chapter 6), they point to the book of Goodfellow, Bengio and Courville. As a short introduction, they point to the IEEE Spectrum article How Deep Learning Works, which is mainly a visualisation of the backpropagation steps:
- Not much to find on the personal pages of Gregory Dudek and Michael R. M. Jenkin (not teaching this term)
- The last Mobile Robotics class listed was Winter 2013.
- Still, I like the level of the book.
February 28, 2024
- Looking into older forum posts. This post was working on a test Perception App with python. The recommendation was to use the RAE Starter template (not the regular template) and to remove the RAE Default app.
- This post was working with the gazebo environment. The mentioned fix was integrated into the main Humble branch 3 weeks ago.
- In this post they found that the audio_spectrum.py example just demos the LCD screen, and that actual sound can be played via the ROS speakers_node.
- In this post Ivo was able to visualize the videostreams with rviz.
- Finally found the post I was looking for, with the charging check via cat /sys/class/power_supply/bq27441-0/status
- Executing this command just gave the answer Charging. capacity_level gave Normal. charge_now gave 886000, while charge_full gave 1510000, so that would mean that the battery level is now 58%.
- The frontend gave a battery level of 96%, so there is some mismatch there. Positive news is that I received a video stream!
- Could even control the motors with the keyboard!
- Switched the RAE off by pushing the power button for 12 seconds (instead of the 8 seconds mentioned in the forum).
February 27, 2024
- According to a post of last Sunday, Luxonis is working on a RAE python SDK.
- This post describes more details to how to get the RAE python SDK working.
- This post reports that the battery drains in 34 minutes and that the Wifi has only 10% of the normal Wifi bandwidth (Luxonis is working on it).
- Found a factory reset procedure in this post.
- At the end they point to this post, which gives flashtool instructions.
- Yet, it is not specified how to install the flashtool, and the troubleshoot link is gone (also not on the Wayback machine). I found the firmware page.
-
- Did a reboot by pushing the power and reset button at the same time. With dmesg | tail I see that there is a connection:
[499580.376346] usb 1-2: new high-speed USB device number 11 using xhci_hcd
[499580.524873] usb 1-2: New USB device found, idVendor=8087, idProduct=0b39, bcdDevice= 0.01
[499580.524876] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[499580.524877] usb 1-2: Product: Intel Movidius Keem Bay 3xxx
[499580.524877] usb 1-2: Manufacturer: Intel Corp.
- Nothing on the display. Didn't continue directly, not sure if the connection stayed (dmesg complains a lot on snap.firefox).
- At least ping 192.168.197.55 gives no response.
- Also ssh root@192.168.197.55 gives no response.
- The LED2 is also no longer blue, only the PGOOD, STATUS and BB-SYS.
-
- Found the flashtool via this post, which points to this document, which indicates to do python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ flashtool
- Checked artifactory, v2.9.0 is the latest version (Aug 2023).
- Downloaded the FIP and OS (luxonisos-1.14-rae.zip). Because 4 *.img files are needed, I probably have to unzip it first. That is correct.
- After updating the udev rules, the command flashtool flashall -d usb fip-dm3370-r5m2e5-1.8.1.bin data.img boot.img system.img syshash.img gives the output:
/home/arnoud/Install/Luxonis/fip-dm3370-r5m2e5-1.8.1.bin is a valid FIP
/home/arnoud/Install/Luxonis/data.img is a valid data image
/home/arnoud/Install/Luxonis/boot.img is a valid boot image
/home/arnoud/Install/Luxonis/system.img is a valid system image
/home/arnoud/Install/Luxonis/syshash.img is a valid verity image
No USB devices found. Please check your connection and ensure the device in recovery mode
- That is strange, because dmesg is still OK, LED2 is still on.
- The USB-device also pops up when queried with lsusb:
Bus 001 Device 015: ID 8087:0b39 Intel Corp. Intel Movidius Keem Bay 3xxx
- Yet, when flashtool is run in verbose mode (-v) I get:
cmd:./x86_64/fastboot devices -l, stdout:
matches []
USB device list: []
- Moved back to nb-dual (native Ubuntu 20.04), installed the flashtool and the luxonisos images.
- Asked a second hand to put the system in recovery mode. Now the flashtool asks for the USB device to update the udevadm rules. After that, the flash starts:
FIP flashed successfully for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 created GPT in 0.0160s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed data in 26.9830s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed boot_a in 16.9080s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed boot_b in 16.7980s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_a in 211.7190s
Successfully flashed device with MX ID: 58927016838860C5 on USB port: 3-4
- Note that the script is sensitive to KeyboardInterrupts; Ctrl-C seems to be ignored.
- Don't seem to make progress. Trying again.
- Now it fails after a short while:
FIP flashed successfully for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 created GPT in 0.0150s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed data in 28.2880s
- Tried again, this time (3x) it works until the end:
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_a in 208.3450s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_b in 311.7550s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed syshash_a in 5.6530s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed syshash_b in 5.5260s
OS flashing done for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5. Please shutdown the system, set the boot switch to normal and reboot the system
Successfully flashed device with MX ID: 58927016838860C5 on USB port: 3-4
- Could also do ssh root@192.168.197.55. Note that lsusb gives no output, but dmesg now gives:
[ 7653.413423] usb 3-8: New USB device found, idVendor=1d6b, idProduct=0103, bcdDevice= 0.01
[ 7653.413433] usb 3-8: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 7653.413436] usb 3-8: Product: USB Ethernet
[ 7653.413439] usb 3-8: Manufacturer: Luxonis
[ 7653.413441] usb 3-8: SerialNumber: t00000002
[ 7653.436996] cdc_ncm 3-8:1.0: MAC-Address: 22:9c:a8:15:ad:c9
[ 7653.437369] cdc_ncm 3-8:1.0 usb0: register 'cdc_ncm' at usb-0000:00:14.0-8, CDC NCM, 22:9c:a8:15:ad:c9
[ 7654.037948] cdc_ncm 3-8:1.0 enx229ca815adc9: renamed from usb0
[ 7654.146394] IPv6: ADDRCONF(NETDEV_CHANGE): enx229ca815adc9: link becomes ready
- The RAE logo showed on the little screen, followed by the register-at-robothub screen. Pointed the RAE to our wifi. The screen gives the message 'Connected to Robothub'.
- After a short while the robot also appears in the list of RobotHub robots, and its ip can be seen:
-
- Note that Francisco Martin Rico is planning to update his book Robot Programming with ROS2.
-
- The documentation of robothub is also gone. The github is still there. This branch was updated last week, the feature/local_development branch was updated an hour ago.
-
- Next thing to do is to test the RAE python SDK.
- According to documentation, I should have Luxonis OS 1.14 (check) and Agent version 23.223.1855 (checked in RobotHub Overview).
- Next would be the correct image in the robotapp.toml. Logged in at the RAE, searched for a robotapp.toml in /home/robothub/data. This toml-file uses as runtime.runs_on ghrc.io/luxonis/robothub-app-v2:2023.218.1238-rvc3. The SDK mentions ghrc.io/luxonis/rae-ros:v0.2.2-humble.
- Ivo German mentioned in his post the StarterApp, which runs on luxonis/rae-ros-robot:dai_ros_py_base.
- In the meantime I am kicked out of the ssh-session and I see the RAE screen coming and going. Is the robot busy with an update?
- No App is assigned to the RAE robot. When I use the Install App, I have the choice between four Luxonis Apps: RAE Default, Streams, Emotion Recognition, Car counting App.
- Selected the RAE - Default app. From this app I have a choice between version 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 2.0.0.
- Because I am not connected, the App is not updated. The robot seems to reboot constantly. Already low on power?
- Disconnected the USB, which gives a constant RAE screen. In RobotHub it makes a connection, and starts to update the Agent version (without asking) from 23.233.1855 to 24.031.1223. RobotHub indicates that the upgrade is at 100% (that it is waiting for the device to connect, that the RAE is online, and that the upgrade takes longer than usual). Again the RAE vanished for a moment from the screen (reboot?). The overview indicated that the install failed, and the agent version is still 23.223.1855. There is an upgrade button (OTA - over the air?)
- Looking at the details, the error is that the App is not built. RobotHub shows again that an Upgrade is in progress, the blue led is blinking, no connection to the wireless ip is possible.
- Changed the connection to the left USB-C port directly (because Windows complained that the RAE required too much power).
- Now the RAE is visible again in RobotHub. Deselected the Default App 2.0.0 and tried version 1.2.1. Current status is "Download in progress" (Perception App - tab). Finally, it becomes Initializing and Running:
- Tried to run the front-face of the App. The Stream is connecting. According to the icon at the bottom the battery is 0% and the internet 10 Mb/s.
-
- From this post it seems that on Jan 25 Ivo was more or less at the same point as I am now.
February 26, 2024
- No yellow or blue light while charging with a USB-C.
- A simple ping 192.168.197.55 also gives no response.
-
- The black ground plate is epoxied to the white cover. With a scalpel we were able to lift the corners:
- You need a Torx screwdriver for the four screws at the corners. We disconnected both connectors (battery and charging pad), which gives us full access to the inside:
- When applying the sequence of disconnecting, connecting, boot, power the green BB-Sys lights up:
- After a while the blue LED2 starts to blink, gets solid, whereafter all LEDs go out again:
- When connecting the USB-side to my laptop, the PGOOD led lights up green, together with an orange status LED:
- When only connected to USB, the orange status LED is blinking. No difference if I use the Dell adapter ring or USB-C to B connector, nor if I use the left USB.
- Could try if I have another 3.7V battery, to check if this one is broken.
-
- The battery has type JV8550105, which is a 3.7 V Battery with 5000mAh with a Molex-3 NTC connector.
- Couldn't find this exact type directly; the closest type found has 3 connectors and is equally flat, but broader and shorter.
- The required width is 48mm, the length is 95mm without connectors, 105mm with connectors. Thickness is 8.5mm.
- So in the type number the first two digits indicate the thickness, the next two the width, and the last three the length (8550105 → 8.5 x 50 x 105 mm).
- Found such a battery here, although the FTN-P8550105 has a higher capacity (6000 mAh).
February 23, 2024
- Formulated a question for the forum. As an answer I got a picture of an open RAE. The screws should be in the corners, but are quite well hidden.
- I have the feeling that the screws are in the white cover, and that the black bottom plate is clipped.
- The LED at the side is now yellow, instead of blue. Blue should be on, yellow off but charging.
February 22, 2024
- Trying to startup the RAE, but still no response. The only response is a small blue light when I push the power button.
- Should try to charge via the charging pad, instead of the USB-C. Next option would be a factory reset, as described here.
- Trying to charge it via the charging pods for an hour.
- Seems that I can check if it is charging by cat /sys/class/power_supply/bq27441-0/status (once I am logged in via ssh), according to this post
- Should also try the double press to switch it off, before starting it up again.
February 20, 2024
January 26, 2024
January 25, 2024
- For the shopping-list: RAE.
January 2, 2024
Previous Labbooks