To be done:
- Implement the book exercises in ROS2, so that they run on the RAE.
September 19, 2024
- Read Run to the Source, which advocates the use of containers.
- They gave three examples:
September 17, 2024
- I tried the failed docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble directly on the RAE again, to see if some of the downloads were cached.
- It seems that half of the pulls were already complete.
- The third time it worked. Started the new docker.
- What is new is that the prompt looks different; it is no longer obvious that you are in the container.
- The next new feature is that the LED now becomes green, and the LCD shows 'RAE ready'. Yet, strangely enough, I can no longer log in to the RAE. Did it change its IP?
- Also, a ros2 topic list in my ros_humble_env doesn't show any additional topics.
- The only error that I see with the bringup is a firmware crash:
[component_container-1] [2024-09-17 08:37:35.106] [warning] Firmware crashed but the environment variable DEPTHAI_CRASHDUMP is not set, the crash dump will not be saved.
[component_container-1] [ERROR] [1726562255.149777561] [rae_container]: Component constructor threw an exception: _Map_base::at
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'rae' of type 'depthai_ros_driver::Camera' in container '/rae_container': Component constructor threw an exception: _Map_base::at
- Couldn't find in my labbook that I did any firmware update, so maybe I should.
- Performed the firmware update and rebooted. Installed inside the container net-tools, which shows that the wifi is still correctly configured to LAB42.
- Note that with two containers, the RAE's disk is now 90% full.
- Note that in the new container, zsh is no longer installed; bash works as shell.
- Now I get an extensive ros2 topic list, both in the RAE container and in my ros_humble_env.
- I see the websocket of foxglove launched, but connecting to the websocket from Chrome fails (although I allowed it to execute unsafe scripts):
Check that the WebSocket server at ws://192.168.197.55:8765 is reachable and supports protocol version foxglove.websocket.v1.
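Before digging into Foxglove protocol versions, a plain TCP check can tell whether the bridge port is reachable at all. A minimal sketch (the host and port are taken from the error message above; this only tests connectivity, not the foxglove.websocket.v1 protocol):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("192.168.197.55", 8765) for the foxglove_bridge above
```

If this returns False, the problem is routing/firewall rather than anything Foxglove-specific.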
- Yet, rviz2 works, displaying both /rae/right/image_raw and /rae/stereo_back/image_raw.
- The RobotModel fails to load. The transforms are OK, but for the chassis I get: Could not load mesh resource 'package://rae_description/urdf/models/RAE-TOP-ASSY_0417.STL'.
-
- Trying to solve this by installing rae-description package also on the laptop side.
- Made a ros_humble_ws, did a git clone https://github.com/luxonis/rae-ros.git in ros_humble_ws/src, followed by a colcon build --symlink-install.
- It fails on a missing package, so did mamba install ros-humble-camera-info-manager. That helps; rae_description can be built. The next missing package is depthai_bridge. Continued with mamba install ros-humble-depthai-bridge. That didn't work. Followed the instructions from here, and did mamba install ros-humble-depthai-ros. Also not available on RoboStack, so time to install it from source.
- Started with mamba install libopencv-dev (doesn't exist). Same for mamba install python3-rosdep.
- Instead did git clone --recursive https://github.com/luxonis/depthai-core.git --branch main in my ros_humble_ws/src.
- That helps. Still an easy_install warning on rae_sdk, and a missing libgpiod-dev for rae_hw.
- Performing mamba install libgpiod-dev also failed, so did sudo apt install libgpiod-dev (outside the ros_humble_env).
- Still, the gpiod library is not found. Instead, tried mamba install libgpiod (which exists). That works; now we are back to the missing depthai_bridge.
- That can be found in depthai-ros (before I installed depthai-core)
- After also cloning git clone https://github.com/luxonis/depthai-ros.git, the next missing package required mamba install ros-humble-vision-msgs.
- Added a COLCON_IGNORE to src/rae-ros/rae_hw. Now depthai_bridge fails on a missing ffmpeg_image_transport_msgs. Did mamba install ros-humble-ffmpeg-image-transport-msgs. That fails. Tried instead mamba install ros-humble-image-transport-msgs. Also fails. Tried mamba install ros-humble-misc-utilities, because it is part of the ros-misc-utilities package.
- Instead did a git clone https://github.com/ros-misc-utilities/ffmpeg_image_transport_msgs.git --branch=humble, and the bridge finishes.
- The depthai_examples couldn't be built (looks like an OpenCV dependency).
- The laptop was out of disk-space, so rebooted and made some space.
- Added a COLCON_IGNORE in ~/ros_humble_ws/src/depthai-ros/depthai_examples
- Also did touch depthai-ros/depthai_ros_driver/COLCON_IGNORE
- Now rae_camera fails on missing rtabmap_slam. Did mamba install ros-humble-rtabmap-ros. That should work, but doesn't, so touch rae-ros/rae_camera/COLCON_IGNORE
- Now the RobotModel is correctly displayed in RVIZ2:
- Note that a sudo apt upgrade updated Foxglove, so maybe it works now.
- When using the right IP address (and setting the Teleop general topic to ), the RAE drives.
- The Image panel doesn't work, because there seems to be no calibration (yet, I see the camera_info in the topic list).
- Also at the RAE inside the container, there is only one image msg (without echo). Maybe running the foxglove bridge is too much, or running rtab-map. At least, the bringup gives a warning: CPU usage is 99.3%.
-
- Looked into /ws/src/rae-ros/rae_bringup/launch: there are 4 launch files there:
-rw-r--r-- 1 root 900 Mar 7 2024 robot.launch.py
-rw-r--r-- 1 root 2400 Mar 7 2024 bringup.launch.py
-rw-r--r-- 1 root 3835 Mar 7 2024 slam.launch.py
-rw-r--r-- 1 root 5243 Mar 7 2024 rtabmap.launch.py
- From those, robot.launch.py is the smallest.
- In /ws/src/rae-ros/rae_camera/launch there are two launch files, actually bigger than robot.launch.py (but smaller than bringup.launch.py).
- In /ws/src/rae-ros/rae_hw/launch there are also two (big) files:
-rw-r--r-- 1 root 5183 Mar 7 2024 control.launch.py
-rw-r--r-- 1 root 3271 Mar 7 2024 control_mock.launch.py
- This launch is described in rae-ros Testing motors.
- In the rae-ros LED node, peripherals.launch.py is mentioned. That launch-file is no longer there.
- Tried to inspect control.launch.py, but first had to install vim. Note that 543 packages are ready to be updated (many ros-related).
- Looked at the LCD node examples, and tried python3 battery_status.py. No warnings, but also no update. That seems correct, because it subscribes to battery_status updates, so that node should be launched first.
- Did a ros2 node list, and three nodes are running (including the battery status):
/battery_status_node
/rviz
/transform_listener_impl_559ab7a2c310
- Note that rviz is running on my laptop, not the RAE. ros2 topic echo /battery_status gives no updates, so looking how to restart a node.
- There is ros2 kill, so looked with ros2 lifecycle nodes, which gave no managed nodes. Read the concept page (2015), which was not explicit about how to make your node managed.
- According to this lifecycle demo, the trick is to inherit from LifecycleNode instead of just Node.
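The managed-node lifecycle boils down to a small state machine over the primary states (unconfigured, inactive, active, finalized). A pure-Python sketch of that machine, as an illustration rather than the rclpy API:

```python
# Primary states and transitions of the ROS 2 managed-node lifecycle.
# This is a plain-Python illustration, not the rclpy LifecycleNode API.
TRANSITIONS = {
    ("unconfigured", "configure"): "inactive",
    ("inactive", "cleanup"): "unconfigured",
    ("inactive", "activate"): "active",
    ("active", "deactivate"): "inactive",
    ("unconfigured", "shutdown"): "finalized",
    ("inactive", "shutdown"): "finalized",
    ("active", "shutdown"): "finalized",
}

class ManagedNode:
    def __init__(self):
        self.state = "unconfigured"

    def trigger(self, transition: str) -> bool:
        """Apply a transition if it is legal from the current state."""
        nxt = TRANSITIONS.get((self.state, transition))
        if nxt is None:
            return False  # illegal transition, state unchanged
        self.state = nxt
        return True
```

Note that no transition leaves "finalized", which matches the observation below that after shutdown there is no restart; a finalized node can only be destroyed.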
- Also, python3 led_test.py doesn't give a lot of response. Let's reboot and try again.
-
- After the reboot, the script led_test.py still gives no output. A ros2 node list gives only car_demo_node.
- Looked into robot.launch.py. It starts two launch files: rae_camera/rae_camera.launch.py and rae_hw/control.launch.py. The last one with three arguments: 'run_container': 'false', 'enable_battery_status': 'true', 'enable_localization': 'true'. The last one can be false, the first one should be checked. The rae_camera.launch.py includes the reset of the PWM, which is a bit strange because that is control related.
- Looked into control.launch.py. Nice to see that at least the led-node has lifecycle control.
- Looked at the latest , of which the first (and the last) included lifecycle control. Copied the file from /ws/src/rae-ros/rae_hw/launch to /ws/install/rae_hw/share/rae_hw/launch/. Launched ros2 launch rae_hw peripherals.launch.py. That launches several nodes, only failing at the led-node:
[INFO] [launch]: All log files can be found below /root/.ros/log/2024-09-17-15-31-59-117344-rae-7-713
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [725]
[INFO] [speakers_node-2]: process started with pid [727]
[INFO] [mic_node-3]: process started with pid [729]
[component_container-1] [ERROR] [1726587120.573542903] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'battery_node' of type 'rae_hw::BatteryNode' in container 'rae_container': Could not find requested resource in ament index
[component_container-1] [ERROR] [1726587120.587835932] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'lcd_node' of type 'rae_hw::LCDNode' in container 'rae_container': Could not find requested resource in ament index
[component_container-1] [ERROR] [1726587120.597513229] [rae_container]: Could not find requested resource in ament index
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'led_node' of type 'rae_hw::LEDNode' in container 'rae_container': Could not find requested resource in ament index
- After this partly failed launch I have five nodes:
/car_demo_node
/launch_ros_713
/mic_node
/rae_container
/speakers_node
- The command ros2 lifecycle nodes gives now two nodes:
/mic_node
/speakers_node
- Looked at the mic_node. Initially ros2 lifecycle get /mic_node:
unconfigured [1]
- Next ros2 lifecycle set /mic_node configure gives:
Transitioning successful
- That is confirmed with ros2 lifecycle get /mic_node:
inactive [2]
- With ros2 topic list I see from the /mic_node only:
/mic_node/transition_event
- Also ros2 lifecycle list /mic_node is interesting, showing the next possible states. Unfortunately, after shutdown there is no restart. Yet, after shutdown the mic_node is still in the ros2 node list.
- Looked in the code of the node. on_shutdown() calls cleanup(), which should give a snd_pcm_close(). Yet, the speakers were not active, so maybe that's why I didn't hear it.
- My commands could be seen in the log of the launch:
[mic_node-3] [INFO] [1726587937.341837139] [mic_node]: Mic node configured!
[mic_node-3] [INFO] [1726588300.750866607] [mic_node]: Mic node activated
[mic_node-3] [INFO] [1726588402.740562268] [mic_node]: Mic node shuttind down!
- Looked at the peripherals.launch.py; the mic-node is launched as a separate LifecycleNode, while the battery is a ComposableNode inside a container, which seems to be a plugin (which fails).
- Looked in control.launch.py. The variable run_container is read, but not used. The battery-node and the others are run as LifecycleNode.
- Modified the peripherals.launch.py script so that it launches the battery-node and led-node in the same way as control.launch.py (leaving the lcd-node failing inside the container for the moment). Still, led_test.py does nothing, while the battery_status script gives an error:
File "/ws/src/rae-ros/rae_bringup/scripts/battery_status.py", line 98, in listener_callback
led_msg.data = [ColorRGBA(r=0.0, g=0.0, b=0.0, a=0.0)]*40
File "/ws/install/rae_msgs/local/lib/python3.10/dist-packages/rae_msgs/msg/_led_control.py", line 223, in data
assert \
AssertionError: The 'data' field must be a set or sequence and each value of type 'ColorPeriod'
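The assertion suggests the fix is to wrap each color in a ColorPeriod instead of assigning bare ColorRGBA values. A self-contained mock of the setter's type check (the ColorPeriod field names below are assumptions for illustration, not the real rae_msgs definition):

```python
from dataclasses import dataclass, field

@dataclass
class ColorRGBA:
    r: float = 0.0
    g: float = 0.0
    b: float = 0.0
    a: float = 0.0

@dataclass
class ColorPeriod:  # hypothetical layout: a color plus a blink period
    color: ColorRGBA = field(default_factory=ColorRGBA)
    frequency: float = 0.0

class LEDControl:
    """Mimics the generated message: data must hold ColorPeriod elements."""
    def __init__(self):
        self._data = []

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, value):
        # This is the check that fired in the traceback above.
        assert all(isinstance(v, ColorPeriod) for v in value), \
            "The 'data' field must be a set or sequence and each value of type 'ColorPeriod'"
        self._data = list(value)

led_msg = LEDControl()
# led_msg.data = [ColorRGBA()] * 40              # raises, as in the traceback
led_msg.data = [ColorPeriod(color=ColorRGBA())] * 40  # the likely fix
```

So battery_status.py probably predates a message change from ColorRGBA to ColorPeriod elements.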
- Yet, the battery-node showed an additional message:
[battery_node-5] [INFO] [1726590364.803195209] [battery_node]: Battery node configured!
[battery_node-5] [INFO] [1726590365.306569899] [battery_node]: Power supply status changed to [Charging] after 0 h 0 min 0 secs.
[battery_node-5] [INFO] [1726590372.591528110] [battery_node]: Battery node activated!
- With the sys_info node active, I now see the topic /cpu (float). ros2 topic echo /cpu gives:
data: 8.699999809265137
---
data: 1.2999999523162842
---
data: 1.0
---
data: 0.699999988079071
---
data: 1.7000000476837158
---
data: 1.2999999523162842
---
data: 1.0
---
data: 1.2999999523162842
---
data: 1.200000047683715
- That is a number that fluctuates a lot. I also see /mem, /disk and /net_up.
- ros2 topic echo /mem gives more consistent values:
---
data: 46.29999923706055
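If the raw /cpu samples are too jittery to read, an exponential moving average can smooth them. A small sketch, applied to the (rounded) samples echoed above:

```python
class Ema:
    """Exponential moving average, to tame a jittery signal like /cpu."""
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # higher alpha = faster response, less smoothing
        self.value = None

    def update(self, sample: float) -> float:
        if self.value is None:
            self.value = sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value

# The /cpu samples echoed above (rounded), smoothed:
ema = Ema(alpha=0.3)
smoothed = [round(ema.update(s), 2) for s in
            [8.7, 1.3, 1.0, 0.7, 1.7, 1.3, 1.0, 1.3, 1.2]]
```

In a subscriber callback this would simply wrap each incoming msg.data before display.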
- Time to go.
September 16, 2024
- Adding the download instructions of the docker-image to the User Manual.
- On my WSL Ubuntu 22.04 the docker installation instructions didn't work.
- On my native Ubuntu 20.04 partition I had a working docker. The command docker pull luxonis/rae-ros-robot:humble downloaded an image of 5.82GB, which was 9 months old.
- Updating the Firmware is documented here. Did a check with mender --version, which indicated:
2.1.2
runtime: go1.14.7
- So, it seems that the latest version of the firmware is already installed.
- While looking for the command to download the actual latest version (6 months old), I found the container page.
- Deleted some old benchbot images to gain space. Some images (lab42, benchbot/simulator:sim_omni) could not be deleted because a container depended on them.
- Executed docker rm -v $(docker ps --filter status=exited -q), which then also allowed removing lab42 and benchbot/simulator:sim_omni. The remaining old images are quite small.
- Uploading to the RAE gave a client_loop: send disconnect: Broken pipe, so instead I did docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble directly on the RAE. Kicked out again, but just tried again. The download doesn't make much progress, so maybe the disk is full. Or is it better to do it via the USB-C cable? At least some downloads are complete. Maybe a good moment to go home and let the download do its thing. Most of the download was finished when I got an unexpected EOF.
- Running docker save ghcr.io/luxonis/rae-ros:v0.4.0-humble failed (reference doesn't exist); giving the IMAGE_ID instead seems to work.
September 13, 2024
- Looked into Jenkin's ROS foxy introduction, although from slide 22 it also covers ROS2, from slide 38 it introduces OpenCV (2 slides only).
- It looks like the code for the students and instructors is the same. At least the code is from Jan 18. The ROS2 code is ros-humble based.
- Their lab0 / Tutorial / apb is on driving a block-robot around with teleop in gazebo.
- Chapter 2 does the same, but this time with navigation (at least, goal positions are given).
- Chapter 4 adds non-visual sensors (bumper, imu, lidar, sonar)
- Chapter 5 is the one which was useful for our first assignment.
- Chapter 6 contains drive by line code (for a car)
- Chapter 7 is driving around based on a lidar map.
- Chapter 8 has no readme, but launches a camera-view. Note that Chapter 8 is about system control.
- Chapter 11 is about multiple robots.
- Chapter 12 is about HCI, the code is about using a joystick with ROS2.
September 11, 2024
- Looked into Dudek's book for ROS2 assignments on calibration or line-following.
- Not in the index. Chapter 6 covers Pose Maintenance, but only has Open Questions about landmarks.
- Camera calibration is covered in section 5.1.3.
- Problem 5.10.2 is about line-structures, and should have example code.
- Problem 5.10.3 is about camera calibration.
- Read the tutorial that comes with the code. Their code is ROS2 Foxy based, although there are also ROS1 Noetic snippets.
- They start with a Hello World in ROS1, before moving to ROS2.
- The ROS2 code is in the ros2_ws workspace, which for ch5 has the 6 example codes: aruco_target.py, canny_edges.py, good_features.py, harris_corners.py, view_camera.py and opencv_camera.py.
- Tried running python3 opencv_camera.py, but nothing happens (also no warnings/errors). It looks like it just publishes the webcam on topic '/mycamera/image_raw'. That is right: it can be displayed with ros2 run image_view image_view --ros-args -r image:=/mycamera/image_raw (inside the ros_humble_env). It should be combined with the python3 view_camera.py script, provided by Dudek.
- Actually, this combination would be a nice Vision Hello World.
- Made a view_rae_camera.py version, but again nothing happens. A bit strange, because the RAE launch seems successful, but only a subset of the topics is visible:
/audio_in
/imu/data
/lcd
/mycamera/image_raw
/parameter_events
/rae/imu/data
/rae/right/image_raw/compressed
/rosout
/tf
- Rebooting RAE-3 and trying again. Strangely enough, I receive a battery warning (5% remaining, while the RAE is charging via USB).
- Let's try RAE-1 for the moment. Same problem. Now all topics are visible, but they are gone after a few minutes.
- Was afraid that it was a memory problem. With docker ps -a we saw all container instances which were exited (at least 20). Checked with df -k. The /data was 92% full. After deleting the old containers with docker rm -v $(docker ps --filter status=exited -q), the data partition is 62% full.
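A quick way to watch for this from a script instead of running df -k by hand, using only the standard library (the /data mount point is the one from the log above):

```python
import shutil

def percent_used(path: str = "/data") -> float:
    """Percentage of the filesystem at `path` in use, like df's Use% column."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return 100.0 * usage.used / usage.total
```

This could be wired into a periodic check that warns before the disk fills up and the docker pulls start failing.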
- Yet, the docker still crashes. Seems to be a power problem, so used my laptop's 45W charger to power the system up.
- A positive thing was that my view_rae_camera.py script showed one image (whereafter the docker crashed again).
- Note that on May 31 I changed the robotapp.toml to v0.4.0. Yet, in the root shell this app is not active (no ROS environment, no ros2 binary).
- Looked at Luxonis Hub. The app was stopped, but when started it directly stops again.
- The commands docker start `docker ps -q -l` and docker attach `docker ps -q -l` are quite handy, because they restart an exited container and bring you back into the container. Tried robothub-ctl stop, but that also stops the wifi (as Joey mentioned).
- Inside the container, there are many ros-packages available in /opt/ros/humble/ros2, but the rae-packages can be found at /ws/install/.
- The command ros2 pkg executables shows all executables, including:
camera_calibration cameracalibrator
camera_calibration cameracheck
camera_calibration_parsers convert
depthai_examples feature_tracker
depthai_examples yolov4_spatial_node
rae_bringup battery_status.py
rae_camera camera
- Executed the last one with ros2 run rae_camera camera, which gave the warning:
[2024-09-11 12:40:58.062] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
- This executable is publishing slightly different topics:
/color/camera_info
/color/image
/color/image/compressed
/color/image/compressedDepth
/color/image/theora
/parameter_events
/rae/right_front/camera_info
/rae/right_front/image_raw
/rae/right_front/image_raw/compressed
/rae/right_front/image_raw/compressedDepth
/rae/right_front/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
- The /rae/right_front/image_raw could be displayed, although with a green band on top (and crashed, while charged with 45W).
- Looked into the docker run command, but -v /dev/bus/usb:/dev/bus/usb was already part of the call.
- Strangely enough, one terminal shows a running camera nodelet, while another terminal, attached to the still active docker, shows only the default topics.
- Looked into /ws/src/rae-ros/rae_camera/launch; there is also a rae_camera.launch.py and perception_ipc.launch.py. The second calls perception_ipc_rtabmap. The first makes use of the depthai_ros_driver::Camera.
- The output of the rae_camera.launch.py is:
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [389]
[component_container-1] [INFO] [1726060174.802235668] [rae_container]: Load Library: /underlay_ws/install/depthai_ros_driver/lib/libdepthai_ros_driver.so
[component_container-1] [INFO] [1726060175.297192658] [rae_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1726060175.297394788] [rae_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-1] [INFO] [1726060175.322210378] [rae]: No ip/mxid specified, connecting to the next available device.
[component_container-1] [2024-09-11 13:09:35.326] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[component_container-1] [INFO] [1726060179.524265695] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[component_container-1] [INFO] [1726060179.524489991] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[component_container-1] [INFO] [1726060179.555126289] [rae]: Device type: RAE
[component_container-1] [INFO] [1726060179.822587784] [rae]: Pipeline type: rae
[component_container-1] [INFO] [1726060180.740068727] [rae]: Finished setting up pipeline.
[component_container-1] [INFO] [1726060181.285472958] [rae]: Camera ready!
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/rae' in container '/rae_container'
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-2]: process started with pid [418]
[INFO] [busybox devmem 0x20320180 32 0x00000000-2]: process has finished cleanly [pid 418]
- The topic list is again:
/diagnostics
/parameter_events
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/rosout
- Looking at the processes running in the container, I only see component_cont.. using 12.6% of the CPU and 8.1% of the memory.
- Installed bsdmainutils, which gave me /usr/bin/column.
- The command paste <(cat /sys/class/thermal/thermal_zone*/type) <(cat /sys/class/thermal/thermal_zone*/temp) | column -s $'\t' -t | sed 's/\(.\)..$/.\1°C/' gave the temperature of the different zones:
mss 44.5°C
css 47.0°C
nce 45.6°C
soc 47.0°C
bq27441-0 48.7°C
iwlwifi_1 45.0°C
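The same information can be collected in Python (sysfs reports millidegrees Celsius, so divide by 1000), which could be handy for a sys_info-style node. A sketch:

```python
from pathlib import Path

def read_zone_temps(base: str = "/sys/class/thermal") -> dict:
    """Map thermal-zone names (the `type` file) to temperatures in °C.

    sysfs exposes each zone's temperature in millidegrees Celsius.
    """
    temps = {}
    for zone in sorted(Path(base).glob("thermal_zone*")):
        name = (zone / "type").read_text().strip()
        millideg = int((zone / "temp").read_text().strip())
        temps[name] = millideg / 1000.0
    return temps
```

For example, a zone reporting 47000 in its `temp` file corresponds to 47.0°C, matching the soc entry above.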
- rae-3 works without problem for Joey (with his face-recognition program), so switching back. Note that the container is 522 packages behind. Maybe update and do a commit?
- Tried first ros2 run rae_bringup battery_status.py, which gives no output.
September 10, 2024
- Started up my RAE.
- Looked up the ip-number in RoboHub (RAE is running agent version 23.223.1855).
- Logged into RAE.
- Created a ~/bin/start_docker.sh.
- Inside the container, I specified ROS_DOMAIN_ID=7, to guarantee that the Coach7 laptop was coupled to the right robot.
- Looked with ros2 topic list which topics were published.
- Did the same at the Coach7 side (after specifying the ROS_DOMAIN_ID=7 there):
/audio_in
/battery_status
/diagnostics
/diff_controller/odom
/diff_controller/transition_event
/dynamic_joint_states
/imu/data
/joint_state_broadcaster/transition_event
/joint_states
/map
/map_metadata
/odometry/filtered
/parameter_events
/pose
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/robot_description
/rosout
/scan
/set_pose
/slam_toolbox/feedback
/slam_toolbox/graph_visualization
/slam_toolbox/scan_visualization
/slam_toolbox/update
/tf
/tf_static
- Had to do a sudo apt install ros-humble-image-view. After that ros2 run image_view image_view --ros-args -r image:=/rae/stereo_front/image_raw worked for the raw images, which are indeed published with a low framerate.
- Also tried ros2 run image_view stereo_view --ros-args -r stereo:=/rae/stereo_front/image_raw, which gave:
[INFO] [1725956552.527957143] [stereo_view_node]: Subscribing to:
* /stereo/left/image
* /stereo/right/image
* /stereo/disparity
[WARN] [1725956552.528719471] [stereo_view_node]: defaults topics '/stereo/xxx' have not been remapped! Example command-line usage:
$ ros2 run image_view stereo_view --ros-args -r /stereo/left/image:=/narrow_stereo/left/color_raw -r /stereo/right/image:=/narrow_stereo/right/color_raw -r /stereo/disparity:=/narrow_stereo/disparity
[WARN] [1725956567.528621568] [stereo_view_node]: [stereo_view] Low number of synchronized left/right/disparity triplets received.
Left images received: 0 (topic '/stereo/left/image')
Right images received: 0 (topic '/stereo/right/image')
Disparity images received: 0 (topic '/stereo/disparity')
Synchronized triplets: 0
Possible issues:
* stereo_image_proc is not running.
Does `ros2 node info stereo_view_node` show any connections?
- Yet, it is a bit unnecessary to run stereo_image_proc to generate the disparity, when it is already generated by the RAE itself.
- For the compressed images I installed ros-humble-image-transport-plugins on both sides, but still the raw transport is used.
- Yet, the RAE container is quite old, so I did an update there (also installed apt-utils to remove an apt warning).
- Yet, even after this update, ros2 run image_view image_view --ros-args -r image:=/rae/right/image_raw/compressed -p image_transport:=compressed fails to give results.
- Looked at image_transport_tutorials, and ros2 run image_transport list_transports gives:
Declared transports:
image_transport/compressed
image_transport/compressedDepth
image_transport/raw
image_transport/theora
- Yet, in rviz2 the compressed image can be displayed, so this is something specific to image_view.
- Gave coach7 to Qi Bi.
-
- Continue with ros-dual, working on the Ubuntu 20.04 partition.
- Logged in to the RAE.
- Edited /etc/hostname from 'rae' to 'rae-7'. Should be visible after reboot.
- Checked the docker I am running on the RAE robot. It is an image from 9 months ago, 5.82 GB.
- Checked the version of the humble containers. v0.0.0 was published 9 months ago. I did a download on October 18, 2023.
- Miniforge was already installed, so did mamba activate ros_humble_env.
- Looked with conda config --env --get channels, which gave two channels:
--add channels 'conda-forge' # lowest priority
--add channels 'robostack-staging' # highest priority
- Installed without problems mamba install ros-humble-desktop
- Continued with mamba install compilers cmake pkg-config make ninja colcon-common-extensions catkin_tools rosdep and mamba install ros-humble-image-transport-plugins
- Also mamba install ros-humble-image-view.
- In the container, I made /etc/ros/scripts/launch_robot.sh, which executes ros2 launch rae_bringup robot.launch.py. Yet, the container crashed; couldn't find the script back.
- So, in the ros_humble_env, I could do ros2 topic list.
- Yet, although the rae side looked OK (purple LEDs burning), the last prints were also OK:
[component_container-1] [INFO] [1725972043.420819022] [battery_node]: Power supply status changed to [Discharging] after 0 h 0 min 0 secs.
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process started with pid [303]
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process has finished cleanly [pid 303]
- Now image_view gives a black window, and the topic list only the default two. Trying a reboot of the RAE.
- Instead of image_view, tried rviz2. An old config is loaded, because also a RobotModel is loaded (and the package rae-description is not installed).
- Yet, it still seems that the RAE robot crashed directly.
- According to Joey, he saw that behavior after working too long with the robot. Letting it cool down and charge solved this unstable behavior.
-
- Trying Joey's RAE instead. That is not heated up, and has the latest docker image.
- Had some trouble logging in. The USB connection only works with an adapter in between, and its IP changed from *.*.*.140 to *.*.*.139.
- Logged in. The docker ps also shows that the image is 9 months old, so how can you check the version?
- Was able to use image_view and rviz2. Also drove around with teleop. Teleop froze for a moment (with a message that the battery was at 25%), but continued directly after (still had all topics).
- Strangely enough, I can still drive, but receive no images. Should build a system to restart per node. Foxglove should be a good start to see what is running.
- Joey already experimented with Foxglove.
- As indicated, the package is already installed on the RAE side, so ros2 launch foxglove_bridge foxglove_bridge_launch.xml works without problems.
- Installed the foxglove-studio on nb-ros native. It tries to connect to the websocket, but complains that Firefox is used.
- With Chrome it goes better, although only a subset of topics is shown. For instance, battery_level cannot be displayed. Restarting the nodes on the RAE.
- Seems to be intended behavior, because the foxglove_bridge reports:
[foxglove_bridge-1] [INFO] [1725979333.325492495] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/move_base_simple/goal" (geometry_msgs/PoseStamped) on channel 1
[foxglove_bridge-1] [INFO] [1725979333.332678530] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/clicked_point" (geometry_msgs/PointStamped) on channel 2
[foxglove_bridge-1] [INFO] [1725979333.350200156] [foxglove_bridge]: Client 192.168.0.204:46308 is advertising "/initialpose" (geometry_msgs/PoseWithCovarianceStamped) on channel 3
- According to foxglove bridge documentation, there is a topic_whitelist.
- Yet, all topics are published by default. After the restart (and setting ROS_DOMAIN_ID=7 in the foxglove-bridge terminal) more channels are added (although some are removed):
[foxglove_bridge-1] [INFO] [1725980706.741722533] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/move_base_simple/goal" (geometry_msgs/PoseStamped) on channel 4
[foxglove_bridge-1] [INFO] [1725980706.760216859] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/clicked_point" (geometry_msgs/PointStamped) on channel 5
[foxglove_bridge-1] [INFO] [1725980706.767273529] [foxglove_bridge]: Client 192.168.0.204:58436 is advertising "/initialpose" (geometry_msgs/PoseWithCovarianceStamped) on channel 6
[foxglove_bridge-1] [WARN] [1725980706.767956825] [foxglove_bridge]: Some, but not all, publishers on topic '/tf' are offering QoSDurabilityPolicy.TRANSIENT_LOCAL. Falling back to QoSDurabilityPolicy.VOLATILE as it will connect to all publishers
[foxglove_bridge-1] [INFO] [1725980706.768092368] [foxglove_bridge]: Subscribing to topic "/tf" (tf2_msgs/msg/TFMessage) on channel 2
[foxglove_bridge-1] [INFO] [1725980706.787031155] [foxglove_bridge]: Subscribing to topic "/tf_static" (tf2_msgs/msg/TFMessage) on channel 6
[foxglove_bridge-1] [INFO] [1725980722.820361441] [foxglove_bridge]: Removed channel 16 for topic "/cmd_vel" (geometry_msgs/msg/Twist)
[foxglove_bridge-1] [INFO] [1725980722.820630318] [foxglove_bridge]: Removed channel 10 for topic "/joint_state_broadcaster/transition_event" (lifecycle_msgs/msg/TransitionEvent)
[foxglove_bridge-1] [INFO] [1725980722.820692818] [foxglove_bridge]: Removed channel 12 for topic "/diff_controller/transition_event" (lifecycle_msgs/msg/TransitionEvent)
[foxglove_bridge-1] [INFO] [1725980722.820737027] [foxglove_bridge]: Removed channel 13 for topic "/dynamic_joint_states" (control_msgs/msg/DynamicJointState)
- Other channels open when those topics are selected in Foxglove.
- Both teleop and the image stream worked. The stream is much faster in Foxglove than in rviz2.
- Next, looked at Rae-faceTracking.
- The code has little documentation.
- Cloned the repository and copied the directory into ~/mamba_ws/src (otherwise empty).
- Did a colcon build --symlink-install from my (ros_humble_env). Got a warning: easy_install command is deprecated.
- Sourced ~/mamba_ws/install/setup.sh and executed python3 src/Rae-faceTracking/rae_oyster/rae_oyster/pearl_node.py. That gave two eyes on the screen, plus an error message:
File "~/mamba_ws/src/Rae-faceTracking/rae_oyster/rae_oyster/pearl_node.py", line 70, in process_image
face_rects = self.face_cascade.detectMultiScale(frame_gray, 1.3, 5)
cv2.error: OpenCV(4.6.0) /home/conda/feedstock_root/build_artifacts/libopencv_1671406913289/work/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'
- There is also another error message, pointing at a missing file:
~/mamba_ws/rae_oyster/resource/haarcascade_frontalface_default.xml
- So, when I start python3 rae_oyster/rae_oyster/pearl_node.py from ~/mamba_ws/src/Rae-faceTracking the face tracking works (including the eyes).
- The setup file specifies install_requires 'setuptools', 'rclpy', 'image_pipeline', 'face_recognition'. It also mentions the entry point rae_oyster.pearl_node:main.
September 9, 2024
- Checked the provided Ubuntu 22.04 machine.
- Booting halts, but continues after an Enter.
- The machine had no internet access, so I should register the machine at iotroam.
- ifconfig (from the net-tools package) was also not installed, so I checked the MAC address with ip link show.
- Running the Dockerfile's apt-get install fails on missing packages such as ffmpeg, python3-rosdep, and ros-humble-rtabmap-slam.
- Started with installing ROS Humble, following the install-from-debs instructions.
- The package locales was already installed. Added en_US as language.
- The package curl was not yet installed. Updated the three update-manager packages that were held back.
- Doing sudo apt install ros-humble-desktop installed 1023 packages.
- Should continue with the Environment setup.
- Added the ros-humble setup to ~/.bashrc. Tested it with ros2 topic list.
- Also ran the Talker-listener example with success (it tests both C++ and Python).
- The Dockerfile only mentions three packages, but including dependencies 47 packages are installed.
-
- Next step is to run the Dockerfile, which requires that Docker is installed; that adds 12 new packages. sudo docker run hello-world works.
- The Docker daemon runs as the root user, so these post-install steps are needed. The first step was not needed; the docker group already existed.
- Still, the image luxonis/rae-ros-robot could not be found.
- Instead, I had to do docker pull ghcr.io/luxonis/rae-ros:v0.4.0-humble, as specified here.
- That covered steps 1-3 of setting up rae-ros, which could be skipped because there is already a Docker image uploaded to the robot. Could continue (tomorrow) with step 4.
August 22, 2024
August 16, 2024
- Section 2.1 of Szeliski's book goes deeper into perspective transformations.
July 16, 2024
July 11, 2024
- Ordered two additional RAE robots (free shipping).
July 1, 2024
- The ideas for the exercises are currently as follows:
- Follow the ceiling lights (2 weeks)
- Localisation on traffic-cones, playing Curling (2 weeks)
- Conquer the flag in a maze (3 weeks)
- There will be a summer TA, and three TAs preparing during period 1.
- The tutorial sessions (werkcolleges) will have one hour on the mathematics and a second hour with a demo of what is expected in the practicals.
- I will give the lectures in week 1-4, together with Shaodi in the first and last week.
June 27, 2024
- Reconnected to the RAE again, following the steps from May 27.
- Logged in at the IP address given by the hub.
- The last update of the rae-ros github was 3 months ago, so I do not have to update.
- Did ros2 launch rae_bringup bringup.launch.py, which should also bring up the slam_toolbox.
- See a lot of error-messages:
[component_container-1] [ERROR] [1719490274.253082247] [rae]: No available devices (3 connected, but in use)
[component_container-1] [INFO] [1719490274.253365414] [rae]: No ip/mxid specified, connecting to the next available device.
- Could be an error because the robot is still connected via USB-C (to charge the battery, although the live view indicates that the battery is at 100%). The live view depletes the battery fast (down to 92% in a minute). The system is frozen.
- When starting ros2 launch rae_bringup robot.launch.py I don't see any errors.
- Restarted nb-dual, because WSL didn't start.
- Logged in to rae, and checked if docker was running with docker ps. Nothing running.
- Started again docker run -it -v /dev/:/dev/ -v /sys/:/sys/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' --network host luxonis/rae-ros-robot:humble.
- The topics are not visible in my WSL Ubuntu 22 (with ROS2 Humble), because WSL2 is using another IP address.
- Looked at the suggestion given in this discussion.
- Hyper-V was not active on my machine, so followed first the instructions for Windows 10. The first two options (via PowerShell) failed, via settings I got a warning about .NET Framework 3.5, which was indeed not installed (and was activated in the Hyper-V instructions). Also selected that component and tried the installation again.
- After the reboot the .NET Framework 3.5 was installed, but the Hyper-V component was not. Tried again, but received the same error.
- Could better try to get the ros-bridge working (natively). See tutorial.
- Before I go there, a last attempt. WSL networking suggests that Mirrored mode networking is possible.
- The WSL manual suggests that .wslconfig should be modified.
- Yet, the setting networkingMode=mirrored has a footnote that it is only for Windows 11.
- Another option could be port forwarding with netsh, as suggested in the WSL networking documentation.
- This long discussion suggested two options (use WSL1 or use a port-forwarding script). The 2nd option is difficult, because WSL2 uses random ports.
- Trying the suggested backup and restart as WSL1.
- Importing the backup gave an ACCESSDENIED error.
- Switching the version didn't work, because I had done wsl -t Ubuntu-22.04 followed by wsl --unregister Ubuntu-22.04.
- Instead, did wsl --set-default-version 1 followed by wsl --install Ubuntu-22.04. That gave (Dutch output, translated):
Ubuntu 22.04 LTS is already installed.
Starting Ubuntu 22.04 LTS...
- Had to install net-tools, but after that I could check that I now have a WSL1 Ubuntu 22.04 version which can connect to LAB42.
- Installed the key-ring and did sudo apt update.
- Did sudo apt install ros-humble-desktop (3Gb).
- Could still login to rae, although the docker was already gone.
- Started the Docker container and the script, but got several errors that the device was already in use. Decided to reboot the RAE. Still the same errors.
- Connected to the LiveView of the Luxonis hub. Now I get both front- and back-image streams, plus I could control the wheels.
- My fault: I was trying the whole bringup, instead of just the robot launch.
- The launch starts (although still some microphone / PCM errors). Yet, I do not see the topics on WSL1, while I can ping the robot's IP.
- Another option is to use a Discovery server, as in this tutorial.
- Rebooted the RAE for a clean start. Running fastdds discovery --server-id 0 --port 11888 in the container.
- Now ros2 topic list gives only the /rosout both in the container and on my WSL1.
- Also did the trick with the SUPER_CLIENT config. Now ros2 topic list shows the /msg topic on both sides, yet ros2 topic echo /msg shows nothing.
- Reset the Windows system-time, which also updates the WSL1 time. Still no echo.
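The SUPER_CLIENT trick is configured through a Fast DDS XML profiles file. A sketch of such a profile, under assumptions: the address reuses the robot IP seen elsewhere in this labbook, the port matches the fastdds discovery --server-id 0 --port 11888 started in the container, and the prefix is Fast DDS's default GUID for server id 0 — adapt all three as needed:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <participant profile_name="super_client_profile" is_default_profile="true">
    <rtps>
      <builtin>
        <discovery_config>
          <discoveryProtocol>SUPER_CLIENT</discoveryProtocol>
          <discoveryServersList>
            <RemoteServer prefix="44.53.00.5f.45.50.52.4f.53.49.4d.41">
              <metatrafficUnicastLocatorList>
                <locator>
                  <udpv4>
                    <address>192.168.197.55</address>
                    <port>11888</port>
                  </udpv4>
                </locator>
              </metatrafficUnicastLocatorList>
            </RemoteServer>
          </discoveryServersList>
        </discovery_config>
      </builtin>
    </rtps>
  </participant>
</profiles>
```

On the client side the file is activated with export FASTRTPS_DEFAULT_PROFILES_FILE=/path/to/super_client.xml, followed by ros2 daemon stop so that the next ros2 topic list restarts the daemon with the profile loaded.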
-
- In the meantime, thinking about what the assignments should be.
- Start with camera calibration, including back-projection of a cube on a chessboard. Could be the standard OpenCV chessboard, or the charuco from Luxonis (one week).
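The back-projection step of such a calibration assignment can be sketched with plain numpy: given intrinsics K and a pose (R, t) estimated from the chessboard, cube vertices defined on the board plane are projected into the image. All numbers below are illustrative, not from a real calibration:

```python
import numpy as np

def project_points(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                   pts_3d: np.ndarray) -> np.ndarray:
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3x4 projection matrix
    homog = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])
    uvw = (P @ homog.T).T                              # Nx3 homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]                    # divide out the depth

# cube of edge 2 standing on the chessboard plane z=0 (illustrative pose)
cube = np.array([[x, y, z] for x in (0, 2) for y in (0, 2) for z in (0, -2)],
                dtype=float)
K = np.array([[500.0, 0.0, 320.0],     # focal length and principal point
              [0.0, 500.0, 240.0],     # are made-up example values
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])         # board 10 units in front of the camera
pixels = project_points(K, R, t, cube)
```

Drawing lines between the eight projected vertices (e.g. with cv2.line) then gives the cube overlay on the chessboard image.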
June 21, 2024
- Looking at missing functions like gluQuadricNormals. It was mentioned in the OpenGL 2.1 reference manual.
- FreeGlut was started in 1999, because GLUT became old. The current version is 3.6.0, but the apt install says that 2.8.1 is the latest version.
- Looked with ldd ./scenelib2/libscenelib2.so | grep gl and /lib/x86_64-linux-gnu/libglut.so.3 is included.
- Looked inside /lib/x86_64-linux-gnu/libglut.so.3, but it only contains glut-functions, not glu-functions.
-
- Trying to install SceneLib2 on Tunis (Ubuntu 12.04)
- Also trying an older version (--branch v0.3).
- Had to modify the C++ standard to -std=c++0x in the CMakeLists.txt of SceneLib2.
- Installed opencv 2.3 (instead of the expected 2.4.2) with sudo apt-get install libcv2.3 libcv-dev libopencv-dev. It only gave an error in the usbcamgrabber, with the YUV 422 color. Commented out three lines.
- The executable MonoSlamSceneLib1 is built.
- Ran the executable on the TestSeqMonoSLAM. When I do 'Display Trajectory' and 'Toggle Tracking' nice things happen.
- Downloading the smallest (#13 - 2Gb) of the outdoor_rides from the frodobot-2k-dataset.
-
- While downloading I looked whether I could do the same trick with v0.3 of Pangolin at nb-dual.
- When doing cmake .. I get the warning that there are two OpenGL libraries available libGL.so libOpenGL.so:
OpenGL_GL_PREFERENCE has not been set to "GLVND" or "LEGACY", so for
compatibility with CMake 3.10 and below the legacy GL library will be used.
- Pangolin v0.8 is the first version that can be built on nb-dual. With this version SceneLib2 can also be built.
- Ran MonoSlamSceneLib2 on nb-dual, on the Ubuntu 22.04 WSL partition.
June 20, 2024
- Read the remainder of Chapter 14 of Peter Corke. The chapter covers both sparse and dense stereo, disparity and image rectification.
It also covers point clouds. At the end three applications are described: perspective correction, image mosaicing and visual odometry.
- In that light the exercises are not very robot-oriented. The only exception is assignment 14.3, on visual odometry.
- For MonoSLAM Peter points to SceneLib (May 2006). Andrew Davison points to the reimplementation SceneLib2 (2012).
- The group has also updated the algorithms, such as CodeSLAM. Yet, it seems to be a paper without code.
- Looking around, I found this LSD-SLAM, which supports MonoSLAM (but requires a camera calibration).
- An extension of LSD is DSO-SLAM, including a ROS wrapper. It not only needs a camera calibration, but also gamma correction and vignette calibration.
-
- It would be interesting to see if SceneLib2 still works for Ubuntu 22.04 (originally Ubuntu 12.04 / 14.04) and if it also works for the dataset of Earth Rover challenge.
- Looking at my WSL partition on nb-dual.
- For Step 1 no new packages had to be installed; I already had them all at the latest version.
- Pangolin had to be installed from source
- The Pangolin install_prerequisites.sh script installed 4 packages libc++-14-dev libc++1-14 libc++abi1-14 libunwind-14 libunwind-14-dev.
- The python installation gives some problems. It tries to install pypangolin==0.9.1, which uninstalls pybind11 (v2.12.0), pillow (v10.3.0) and numpy (v1.26.4), and installs numpy-2.0.0, which gives some incompatibility warnings. Yet, it ends with Successfully installed.
- All 15 tests were successfully passed.
- Back to cloning the source code of SceneLib2 with git clone --recursive https://github.com/hanmekim/SceneLib2.git
- CMake failed on /home/arnoud/git/Pangolin/build/PangolinConfig.cmake, because Eigen3 could not be found.
- The trick from this discussion, adding REQUIRED NO_MODULE, worked.
- Yet, the make fails horribly on Pangolin/components/pango_core/include/sigslot/signal.hpp
- Used the trick from this discussion and updated the C++ version in the CMakeLists.txt of SceneLib2 from std=c++11 to c++14.
- Next were some version problems of OpenCV in usbcamgrabber.cpp. Upgraded the calls to cvtColor() and resize(). The framegrabber works.
- Next were some old calls on HandleInput(), which I replaced with !ShouldQuit().
- Next, -lBoost::thread could not be found. Adding set(Boost_USE_STATIC_LIBS ON) brought me further, but when building the shared library libscenelib2.so I now get a bad value for /usr/lib/x86_64-linux-gnu/libboost_thread.a
- In sceneLib2/CMakeLists.txt the Boost dependencies are defined. That library is built without problems. Maybe I should also add that Boost dependency in examples/CMakeLists.txt. That works for Boost. Now some glu functions are missing.
June 19, 2024
June 6, 2024
- Finished the remainder of Chapter 13 of Peter Corke's book. In exercise 5 the calibration of section 13.2.1 is redone. Would be nice to do that for the RAE.
- Tried the suggested python script by Luxonis, but the wsl subcommand doesn't exist anymore.
- I am not the only one with this problem, although SpudTheBot describes a solution. The camera reboots, so it gets a new busid. So, time to write an updated version of the suggested script.
- No success, time to go native. Repeated python3 ColorCamera/rgb_preview.py on my Ubuntu 20.04 partition, but same error. Try again, without adapter in between.
- Note that while installing the requirements, I got the error-message:
machinevision-toolbox-python 0.9.6 requires opencv-python, which is not installed.
robothub-sdk 0.0.3 requires opencv-python, which is not installed.
I think that the actual dependency is on opencv-python3, and it doesn't affect the demo.
- The rgb-demo works!
- Continue with Luxonis calibration guide.
- Did a git submodule update --init --recursive, which updated the boards-directory.
- The command python3 install_requirements.py updated opencv_contrib_python (v4.5.5.62) and depthai (v2.21.2.0).
- Next step would be to display a Charuco board on a TV screen. Could use nb-ros and the Mitsubishi screen (42 inch) for that (tomorrow).
June 5, 2024
- Built a 3D calibration cube from three chessboard patterns.
- Followed the instructions of Luxonis to install it on WSL2
- From usbipd, I downloaded v4.2.0
- Followed WSL support tricks.
- In a PowerShell (admin), usbipd list shows the Movidius MyriadX (not shared).
- In my case I had to do usbipd bind --busid 6-4. Now the device is Shared.
- Next is usbipd attach --wsl --busid 6-4 in a new PowerShell (user). Now the device is Attached. Note that the attach output indicated that I can use IP address 172.21.224.1 to reach the host.
- I can see the device with lsusb in WSL. No further client-side tooling is required.
- No need of python3 -m pip install depthai, that library was already installed.
- Still had to do git clone https://github.com/luxonis/depthai-python.git
- Moved to depthai-python/examples. python3 install_requirements.py installed several packages, including opencv 4.10.
- Yet, python3 ColorCamera/rgb_preview.py fails. The device is also no longer visible with lsusb.
- In PowerShell the device is no longer Attached, only Shared. The busid changed, but the device is gone again when I tried a 2nd time. The Python script fails with RuntimeError: Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND.
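Since the MyriadX reboots into a new busid after an attach, any re-attach script has to look up the busid fresh every time. A sketch of such a helper; the device name and the column layout of `usbipd list` are assumptions based on its typical output:

```python
import re
import subprocess
from typing import Optional

def find_busid(usbipd_output: str, device: str = "Movidius MyriadX") -> Optional[str]:
    """Return the busid of the named device from `usbipd list` output."""
    for line in usbipd_output.splitlines():
        if device in line:
            # busid is the first column, e.g. "6-4"
            match = re.match(r"\s*(\d+-\d+)\s", line)
            if match:
                return match.group(1)
    return None

def attach_to_wsl(device: str = "Movidius MyriadX") -> None:
    """Re-resolve the current busid and attach the device to WSL."""
    listing = subprocess.run(["usbipd", "list"],
                             capture_output=True, text=True, check=True).stdout
    busid = find_busid(listing, device)
    if busid is None:
        raise RuntimeError(f"{device} not found in usbipd list")
    subprocess.run(["usbipd", "attach", "--wsl", "--busid", busid], check=True)
```

Running attach_to_wsl() in a retry loop (from an admin PowerShell calling python) would cover the case where the camera re-enumerates under a new busid after booting its firmware.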
June 4, 2024
- Found an Inria paper (2001) which at least uses the 3D calibration target.
- This paper from 2018 uses the same 3D calibration target, based on applying 3 times Zhengyou Zhang's method.
- This 1997 paper uses a 2 surface target with squares instead of circles.
- The same 2-surface target is used in this paper from 1992.
- This 2022 paper uses a 3D checkerboard instead of circles.
- They point to several github pages.
- This repository combines a Velodyne VLP-16 with a Depth camera, including video-tutorial.
- This repository has not only a calibration-part, but also can simulate a 2D-laser from the 3D point cloud (7y old).
- The last repository is from 2022, specialised for autonomous vehicles, and uses a new proposed calibration target:
- This Structure from Motion lecture from 2008 uses the 3D circle pattern. It also points back to Zhengyou Zhang's method.
- As an alternative, he points to Criminisi, Reid and Zisserman, where I recognize the wooden hut still used by Scaramuzza.
June 3, 2024
- Read the OpenCV blog on camera calibration. Included a nice image of the calibration pattern of the Mars rover, including patches for color and size:
- It also points to calib.io, which allows to print other calibration patterns than the standard checkerboard (and allows to add finder markers and radon checkers).
-
- To get Rerun working on WLS2, I also should install the additional packages with sudo apt-get -y install \
libvulkan1 \
libxcb-randr0 \
mesa-vulkan-drivers \
adwaita-icon-theme-full, according to the rerun troubleshooting.
-
- Reading Chapter 13 of Peter Corke's book. It is about Image Formation; section 13.1.5 covers the projection of a cube, which was the first assignment in Zurich's course.
- Section 13.2.1 does the calibration (easy to understand) with a 3D target.
- Looking for this 3D calibration target, but it seems no longer popular.
- The paper Camera Calibration Methods Evaluation Procedure for Images Rectification and 3D Reconstruction (2008) points back to the Faugeras-Toscani method (see The Calibration Problem for Stereoscopic Vision (1989)).
- Figure 13.10 with the 3D calibration target is from Fabien Spindler.
- This PhD thesis on Pose estimation of rigid objects and robots (2023) points to Spindler's Pose Estimation for Augmented Reality: A Hands-On Survey, which is a nice overview paper, but without the target.
- In 2018 Fabien Spindler gave a tutorial at ICRA, on object tracking.
- The tutorial points back to the code of the blob-tracking tutorial (C++) and the pose estimation tutorial (also C++).
- Peter Corke's figures are coming from the Camera calibration tutorial (flat pattern, not the 3D pattern).
- There is also a Using ViSP on NAOqi tutorial!
- And a Bridge over OpenCV tutorial.
- I like the Image projection tutorial, which is the first Zurich assignment, but now for only 4 circular points.
- There is also a tutorial on tracking with RGB-D camera
- Which is even more fun when you use your own cube with an AprilTag.
- There is also a ros1 bridge
- The ROS Rolling version seems to be ROS2 compatible, although the documentation is not completely updated. Yet, the installation instructions mention ros-humble-visp.
May 31, 2024
- Looking into Dockerfile, which should be updated first.
- Working in the container on RAE.
- Most packages were up-to-date, except ros-humble-rmw-cyclonedds-cpp, which also installs acl libacl1-dev libattr1-dev ros-humble-cyclonedds ros-humble-iceoryx-binding-c ros-humble-iceoryx-hoofs ros-humble-iceoryx-posh (Docker uses --no-install-recommends).
- Also gstreamer1.0-plugins-bad needed an update, which also installed libgstreamer-plugins-bad1.0-0 (485 packages to be updated).
- Also ros-humble-rtabmap-slam needed an update (no side-effects)
- For one reason the package unzip was not installed.
- Installing ffmpeg also installed libavdevice58 libavfilter7 libcdio-cdda2 libcdio-paranoia2 libcdio19 libmysofa1 libpocketsphinx3 libpostproc55 librubberband2 libsphinxbase3 libvidstab1.1 libzimg2 pocketsphinx-en-us.
- Also ros-humble-image-proc and git were not installed
- Installing libsndfile1-dev also installed libflac-dev libopus-dev libvorbis-dev
- Running pip3 didn't work, first had to install python3-pip, which also installed python3-wheel.
- Lost the connection, the state of the docker was not saved.
- Again did git clone https://github.com/luxonis/rae-ros.git and tried MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_hw, which failed on SNDFILE_INCLUDE_DIR-NOTFOUND
- Installed apt-get install libsndfile1-dev libsndfile1, as in the Dockerfile.
- While building rae_hw, I get the warning that rae_msgs is in the workspace, but is used from the following location instead: /ws/install/rae_msgs.
- Yet, the connection broke again before the package was finished. Tried again. The make failed on /home/root/git/rae-ros/rae_hw/include/rae_hw/peripherals/mic.hpp:5:10: fatal error: rae_msgs/srv/record_audio.hpp
- So, tried MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_msgs first. That package is build without problems.
- The rae_hw build still fails on the mic_node. Commented this node out from the CMakeLists.txt.
- Next node to fail is the speaker_node, which fails on request->file_location and request->gain in rae_msgs::srv::PlayAudio_Request.
- Afraid that I should have first done source ~/git/rae-ros/install/setup.sh before building the rae_hw package, so that the correct rae_msgs are loaded.
- Made sure that this script worked, by creating install/rae_hw/share/rae_hw/local_setup.sh (copy from rae_msgs) and install/_local_setup_util.py (copy of _local_setup_util_sh.py)
- Still the gain-error. Also removed the speaker_node for the moment.
- Next to fail is the led_node, because from the ros package std_msgs the member color is missing.
- Upgrading all 485 packages, including several ros-humble-*-msgs.
- I also see python3-rosdistro-modules.
- The packages ros-humble-rtabmap and ros-humble-rtabmap-util were kept back, so installed them manually.
- Still the same error for the led_node. Also commented out this node in the CMakeLists.txt.
- That worked, with only one warning on an unused variable.
-
- Continued with git clone --branch rvc3_calibration https://github.com/luxonis/depthai.git.
- Did apt-get install python3-pip (still in the container of RAE).
- Next is python3 install_requirements.py. That for instance installs opencv_contrib_python 4.5.5 and numpy 1.26.4.
- Note that I should add /root/.local/bin to PATH. Kicked out of the RAE again.
- Tried again. First build rae_msgs (after installing libsndfile1). Takes 3m42s
- Had to copy _local_setup_util.py, but thereafter source install/setup.sh works.
- Also the build of rae_hw now goes better. After 9m32s it completed the build, with only three warnings on unused variables. Had to add some links to start-up scripts. Started ros2 launch rae_bringup robot.launch.py from the container. Got some errors about no available containers. Should have run source entrypoint.sh first. Kicked out of the container (not of the RAE).
-
- Time for lunch.
- Tempted to create a new user (humble), give it sudo-rights and try to install depthai there.
- Yet, no apt, no apt-get, no git.
- Yet, dpkg and wget are there.
- Looked at /etc/os-release. The codename is "dunfell", the pretty name "Luxonis OS 1.14". Actually, dunfell seems to be an LTS version from the Yocto project. Yet, dunfell had python 3.5 and tar 1.28, while Luxonis OS 1.14 is based on python 3.8.13 and tar 1.32.
- Looked at the Luxonis container page, and saw that there is also a v0.4.0
- Changed /home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml from v0.2.2 to v0.4.0 and rebooted
-
- The Docker image is also built with Spectacular AI.
- Should try some of the examples.
- Cannot see the difference after the reboot, both uname -a and more /etc/os-release gave the same output.
- Connected another terminal to the container. Still ros2 launch rae_bringup robot.launch.py fails
- A quite recent container is rae-cross-compile.
- On WSL Ubuntu 22.04, I did docker pull ghcr.io/luxonis/rae-cross-compile:dev_0.4.0_humble.
- Entered the rae-cross-compile container at WSL, but colcon is not known, and python3-colcon-common-extensions is not known either.
-
- Instead, on WSL, I did git clone --branch rvc3_calibration https://github.com/luxonis/depthai.git.
- Ran python3 install_requirements.py. Running depthai_demo.py doesn't work (no OAK device connected to USB), but from rae_sdk.robot import Robot fails on missing rclpy (so rae_sdk is found).
- Sourced again source /opt/ros/humble/setup.bash. rclpy now works; it now fails on missing rae_msgs.
- Built the rae_msgs from the github. After running install/setup.bash, I see /home/arnoud/git/rae-ros/install/rae_msgs/local/lib/python3.10/dist-packages at the front of PYTHONPATH.
- Next failure is missing ffmpeg. Did python3 -m pip install openai ffmpeg-python.
- Next failure is missing depthai_ros_py_bindings. Trying again with making rae_hw. That fails because CMake cannot find an internal package.
- Instead, I did git clone https://github.com/luxonis/depthai-ros.git. Looked at the documentation. Instead, did sudo apt install ros-humble-depthai-ros. If I want to build my own docker image, I can look at this documentation.
- Looking around at the RAE container, but couldn't find the ros-py-bindings. Installed apt install python3-pip again, and tried to do python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai. It complained that the extra-index-url is not correctly installed, but now I can do import depthai. I still get a warning on USB protocol not available.
- Tried to run python3 yolov4_publisher.launch.py, which gave no errors. Same for stereo_inertial_node.launch.py.
- The command ros2 launch depthai_filters example_seg_overlay.launch.py starts. Some nodes crashes (no available devices), but I get at least the topics /joint_states
/parameter_events
/robot_description
/rosout.
May 27, 2024
- Started the RAE, was very happy with the startup sound.
- Checked that the Default App is running at Luxonis control hub.
- Logged in at the IP address given by the hub.
- Started docker run -it -v /dev/:/dev/ -v /sys/:/sys/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' --network host luxonis/rae-ros-robot:humble.
- Did a dmesg | tail, which gives alloc_contig_range: [a6800, a68e1) PFNs busy, no idea if that is good or bad.
- Again, the command ros2 launch rae_bringup robot.launch.py seems to fail:
[ros2_control_node-5] [INFO] [1716817013.098112201] [resource_manager]: Initialize hardware 'RAE'
[ros2_control_node-5] terminate called after throwing an instance of 'std::system_error'
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
[mic_node-8] [FATAL] [1716817013.743320487] [mic_node]: Unable to open PCM device: Device or resource busy
[mic_node-8] [INFO] [1716817013.744430406] [mic_node]: Mic node running!
[mic_node-8] mic_node: pcm.c:1636: snd_pcm_readi: Assertion `pcm' failed.
[rae]: No available devices (2 connected, but in use)
- Because the mic node also fails, maybe my earlier attempt to add a startup sound is interfering.
- Removed the crontab, by replacing the command with an empty file (in /var/spool/cron/crontabs). The command crontab -l gives no output (also no warning).
- Rebooted the system, still says ping (from /etc/rc.local).
- That partly helps, the LED node is now running (ring becomes purple). Yet, the MIC node and control node still fail.
- Removed also /etc/rc.local. No ping anymore.
- That helped, now the PCM could be reset:
[mic_node-8] [INFO] [1716819090.305976605] [mic_node]: Mic node running!
[ros2_control_node-5] [INFO] [1716819090.325092897] [resource_manager]: Initialize hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.329573230] [resource_manager]: Successful initialization of hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.329928647] [resource_manager]: 'configure' hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331642272] [resource_manager]: Successful 'configure' of hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331757355] [resource_manager]: 'activate' hardware 'RAE'
[ros2_control_node-5] [INFO] [1716819090.331794897] [resource_manager]: Successful 'activate' of hardware 'RAE'
[component_container-1] [INFO] [1716819090.595384147] [battery_node]: Battery node running!
[ros2_control_node-5] [INFO] [1716819094.063310815] [controller_manager]: Loading controller 'diff_controller'
[spawner-6] [INFO] [1716819094.783641232] [spawner_diff_controller]: Loaded diff_controller
[ros2_control_node-5] [INFO] [1716819094.793637357] [controller_manager]: Configuring controller 'diff_controller'
[spawner-6] [INFO] [1716819094.880909982] [spawner_diff_controller]: Configured and activated diff_controller
[INFO] [spawner-6]: process has finished cleanly [pid 162]
[ros2_control_node-5] [INFO] [1716819095.976911733] [controller_manager]: Loading controller 'joint_state_broadcaster'
[spawner-7] [INFO] [1716819096.100183902] [spawner_joint_state_broadcaster]: Loaded joint_state_broadcaster
[ros2_control_node-5] [INFO] [1716819096.112847773] [controller_manager]: Configuring controller 'joint_state_broadcaster'
[ros2_control_node-5] [INFO] [1716819096.113834039] [joint_state_broadcaster]: 'joints' or 'interfaces' parameter is empty. All available state interfaces will be published
[spawner-7] [INFO] [1716819096.201486699] [spawner_joint_state_broadcaster]: Configured and activated joint_state_broadcaster
[INFO] [spawner-7]: process has finished cleanly [pid 164]
[component_container-1] [INFO] [1716819098.354302116] [rae]: Camera with MXID: xlinkserver and Name: 127.0.0.1 connected!
[component_container-1] [INFO] [1716819098.354530036] [rae]: PoE camera detected. Consider enabling low bandwidth for specific image topics (see readme).
[component_container-1] [INFO] [1716819098.386023148] [rae]: Device type: RAE
[component_container-1] [INFO] [1716819098.677438728] [rae]: Pipeline type: rae
[component_container-1] [INFO] [1716819100.112680251] [rae]: Finished setting up pipeline.
[component_container-1] [INFO] [1716819101.050747669] [rae]: Camera ready!
[INFO] [launch.user]: Resetting PWM.
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process started with pid [283]
[INFO] [busybox devmem 0x20320180 32 0x00000000-10]: process has finished cleanly [pid 283]
[component_container-1] [INFO] [1716819106.099834032] [battery_node]: Battery capacity: 30.000000, Status: [Discharging] for 0 h 0 min 5 s. Time since last log: 0 h 0 min 15 secs.
- Yet, still warnings:
[component_container-1] [2024-05-27 14:11:31.049] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[component_container-1] [WARN] [1716819281.600596252] [battery_node]: Battery status low! Current capacity: 25.000000
- The USB-protocol could maybe be activated with the commands at the end of rae-ros documentation: gpioset gpiochip0 44=1, echo host > /sys/kernel/debug/usb/34000000.dwc3/mode.
-
- In another terminal, I attached to the running container image with docker exec -it CONTAINER_ID zsh (CONTAINER_ID can be found with docker ps).
- The LED-ring goes off, but I still see all topics published, including cmd_vel, rae/imu/data and /imu/data, a depth_image, /rae/left_back/image_raw and /rae/right/image_raw.
-
- Running ros2 run teleop_twist_keyboard teleop_twist_keyboard in the container seems to work (mounted on a mug).
- The python interface doesn't work inside the container.
- Instead, went to WSL Ubuntu 22.04's ~/git/rae-ros/rae_sdk. Did there python3 setup.py build, followed by python3 setup.py install --user
- That helps, because in Python 3.10.12 the call from rae_sdk.robot import Robot now fails on ModuleNotFoundError: No module named 'rclpy'
- That can be solved by source /opt/ros/humble/setup.bash. Now the import fails on ModuleNotFoundError: No module named 'rae_msgs'.
- Looked into ../rae_msgs, but no python modules there.
- Back to the container on RAE. Made a /home/root, created /home/root/git, git clone https://github.com/luxonis/rae-ros.git. In the ~/git/rae-ros/rae_sdk directory, I did again python3 setup.py build, followed by python3 setup.py install
- This gave a new error message:
File "/home/root/git/rae-ros/rae_sdk/rae_sdk/robot/led.py", line 2, in
from rae_msgs.msg import LEDControl, ColorPeriod
ImportError: cannot import name 'ColorPeriod' from 'rae_msgs.msg' (/ws/install/rae_msgs/local/lib/python3.10/dist-packages/rae_msgs/msg/__init__.py)
- Tried to do git clone https://github.com/luxonis/rae-ros.git --depth 1 --branch v0.2.2-humble to get a version that corresponds with version specified as image in the Default app. Yet, checking led.py with blame shows that the ColorPeriod was always there.
- When I look here, ColorPeriod.msg
- Checked out the latest version again with git clone https://github.com/luxonis/rae-ros.git and tried the suggestion in the development-on-docker section of the documentation: MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-select rae_hw. The build fails on SNDFILE_INCLUDE_DIR-NOTFOUND. Instead did the same command, but now for rae_msgs. That build finished without any problems.
- Tried to do source install/setup.bash, but that is looking for /home/root/git/rae-ros/local_setup.bash. Going into the install-directory doesn't help, now /opt/ros/humble/_local_setup_util_sh.py is not found.
- Yet, it has effect: in Python3, from rae_sdk.robot import Robot now fails on No module named 'depthai'.
- Did apt install python3-pip, followed by python3 -m pip install depthai. v2.26.0.0 is installed.
- Next missing module is 'depthai_ros_py_bindings'.
- That module cannot be installed with . Found this post on depthai examples on the RAE.
- According to , it should be apt install ros-humble-depthai-ros. Got some errors on the arm64 architecture, with the suggestion to do apt update first. That works, but the bindings still cannot be found.
- There are 479 packages to be updated, which will take a while. Maybe better to follow the Docker instructions one-by-one.
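The chain of missing modules above can be checked in one go; a small sketch (my own helper, not part of rae_sdk) that lists which imports would still fail before trying from rae_sdk.robot import Robot again:

```python
import importlib.util

def missing_modules(names):
    """Return the module names that cannot currently be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The import chain failed on, successively:
required = ["rclpy", "rae_msgs", "depthai", "depthai_ros_py_bindings"]
print(missing_modules(required))
```

Note that for rclpy and rae_msgs this only succeeds after sourcing the corresponding setup.bash, since that extends PYTHONPATH.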
May 24, 2024
- Should start the docker with the commands found in the rae-ros documentation
- Looked at the github, the latest update was on March 28 (issue #75, imu correction).
- Tried docker buildx build --platform arm64 --build-arg USE_RVIZ=0 --build-arg SIM=0 --build-arg ROS_DISTRO=humble --build-arg CORE_NUM=10 -f Dockerfile --squash -t rae-ros-robot:humble-imu-corr --load .
- First had to install docker at WSL Ubuntu 22.04
- Although I did sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes, docker buildx build --platform arm64 still gave unknown flag: --platform
- Tried to skip the first two steps by just doing docker pull luxonis/rae-ros-robot:humble, but even that failed (no permission on the socket, maybe because I did the first docker run with sudo rights).
- Doing an upgrade on all pending packages.
- Applied this trick (adding myself to the docker group).
- Now docker pull luxonis/rae-ros-robot:humble works. Actually, that docker image is one built on 29 Nov 2023, so quite old.
- Next step is to copy that file to rae. The command to do that is specified in step 3, but from this post I added | bzip2 | pv | in between.
- I hope it fits, because the image is 5.82 GB, while the disk has only 2.7 GB capacity (2.3 GB used).
- Strangely enough, the docker images from the Default app are not visible.
- In principle docker buildx should work; --platform is one of its options.
- After sudo apt install docker-buildx the second step works.
- Strangely enough, all my Ubuntu terminals were gone after lunch.
- Uninstalling the 2nd Default app.
- ssh to rae. Now at least the image appears (5 months old), when doing docker image ls.
- Ran docker run -it --restart=unless-stopped -v /dev/:/dev/ -v /sys/:/sys/ --privileged --net=host luxonis/rae-ros-robot:humble. Entered a shell inside this image. Could do ros2 topic list, which gives only /parameter_events and /rosout.
- Started another Ubuntu terminal, ssh'd into rae, and created another session with docker exec -it CONTAINER_ID zsh. In that session I did ros2 launch rae_bringup robot.launch.py
- In the original session I could do ros2 topic list again, which now gives:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/lcd
/leds
/odometry/filtered
/parameter_events
/rae/imu/data
/rae/imu/mag
/rae/left_back/camera_info
/rae/left_back/image_raw
/rae/left_back/image_raw/compressed
/rae/left_back/image_raw/compressedDepth
/rae/left_back/image_raw/theora
/rae/right/camera_info
/rae/right/image_raw
/rae/right/image_raw/compressed
/rae/right/image_raw/compressedDepth
/rae/right/image_raw/theora
/rae/stereo_back/camera_info
/rae/stereo_back/image_raw
/rae/stereo_back/image_raw/compressed
/rae/stereo_back/image_raw/compressedDepth
/rae/stereo_back/image_raw/theora
/rae/stereo_front/camera_info
/rae/stereo_front/image_raw
/rae/stereo_front/image_raw/compressed
/rae/stereo_front/image_raw/compressedDepth
/rae/stereo_front/image_raw/theora
/robot_description
/rosout
/set_pose
/tf
/tf_static
- Did a ros2 topic list in WSL, but only the default two topics are shown. Yet, no ROS_DOMAIN_ID is specified, so both are running domain 0, as explained in this ConstructSim lesson.
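Since DDS discovery only matches participants in the same domain, and an unset ROS_DOMAIN_ID falls back to domain 0, a quick sanity check on both machines is:

```shell
# Print the effective ROS domain; unset means the default 0.
echo "ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}"
```

If both sides print the same number and still see different topics, the problem is network discovery (multicast between WSL and the robot), not the domain id.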
- Reboot, and look if I could see it when I use native Ubuntu.
- Could not login, but the Luxonis hub also gave status unknown, and the ring was purple.
- Rebooted the RAE. Now the hub indicates Running again. Could also log in via USB with ssh root@192.168.197.55
- No ros2 on nb-dual native Ubuntu20.04, so went to WS9.
- Started ros2 launch rae_bringup robot.launch.py, which showed a subset of the topics (but it WORKED!)
- At startup, I get a number of errors:
[ros2_control_node-5] what(): error requesting GPIO lines: Device or resource busy
[component_container-1] [2024-05-24 11:52:18.518] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[mic_node-8] [FATAL] [1716551538.531122684] [mic_node]: Unable to open PCM device: Device or resource busy
[component_container-1] [ERROR] [1716551542.707475672] [rae]: No available devices (3 connected, but in use)
[component_container-1] [INFO] [1716551542.707825674] [rae]: No ip/mxid specified, connecting to the next available device.
- Yet, this time I started with the first run command from step 4, while previously I used the second run command (including the device-cgroup-rule). Still, there are some errors.
- Trying a reboot of RAE. Even after a reboot some devices are not available.
- This is the list of topics visible from WS9:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/odometry/filtered
/parameter_events
/rae/imu/data
/robot_description
/rosout
/set_pose
/tf
/tf_static
- When I do ros2 node list I get:
/battery_status
/diagnostics
/diff_controller/odom
/imu/data
/joint_states
/odometry/filtered
/parameter_events
/rae/imu/data
/robot_description
/rosout
/set_pose
/tf
/tf_static
- With the GPIO pins, I could try to unexport, as described in this post. Yet, rebooting seems to be enough.
- Note the ekf-settings, which point to this localisation algorithm.
- Also note that with the full stack, I can also activate the ros_bridge which should allow me to communicate with ros1-nodes.
- Playing a sound on RAE works with gst-launch-1.0 playbin uri=file:///home/root/sounds/complete.oga, which I downloaded from freedesktop.
- Adding the sound via a crontab doesn't work, because such a directory doesn't exist.
- Used the trick at the end of this post to set the cronjob. Yet, it doesn't seem to work.
- Doing the same trick via /etc/rc.local as suggested here worked.
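For reference, a minimal /etc/rc.local sketch along those lines (assuming the sound file path from above; the file must be executable):

```shell
#!/bin/sh -e
# Play a startup sound once the system is up; backgrounded so boot is not blocked.
gst-launch-1.0 playbin uri=file:///home/root/sounds/complete.oga &
exit 0
```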
- Tried again, but now with simply ros2 launch rae_hw control.launch.py. Yet, I get both:
[ERROR] [ros2_control_node-4]: process has died [pid 158, exit code -6, cmd '/opt/ros/humble/lib/controller_manager/ros2_control_node --ros-args --params-file /tmp/launch_params_z1bgybcn --params-file /ws/install/rae_hw/share/rae_hw/config/controller.yaml -r /diff_controller/cmd_vel_unstamped:=cmd_vel'].
[ERROR] [mic_node-8]: process has died [pid 166, exit code -6, cmd '/ws/install/rae_hw/lib/rae_hw/mic_node --ros-args'].
- So teleop doesn't work. I see the lcd and led_node.
- Tried the python code inside the container on the RAE. Still: no module rae_sdk found.
- Inside the container I have python3.10.12
- Still in the container, I did apt install python3-pip
- Yet, installing rae-sdk didn't work.
- Searched for the sdk both on the bare rae and container. Only found /usr/lib/robothub/sdk/python, which contains the default app code.
- Ivo suggested in his post that the rae_sdk should be somewhere in /ws/build, but I do not see it.
- Try next week to build a new docker file, which is not 5 months old.
- Note that in the entrypoint, which is run inside the container, they do echo ${channel} > /sys/class/pwm/pwmchip0/export.
May 23, 2024
- Booted up the RAE with the USB-charging. Starts to drive on one wheel. Reboot.
- Was able to ssh as root into RAE.
- According to RAE python SDK, I should replace the image in the robotapp.toml.
- Yet, there are several robotapp.toml files:
/usr/lib/robothub/builtin-app/robotapp.toml - Dec 22
/home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml - May 22
/home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml - May 22
/data/persistent/home/robothub/data/sources/ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3/robotapp.toml - May 22
/data/persistent/home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml - May 22
- This could be the two Default Agents - both trying to start
- First try to do the suggestions from this discussion.
- Could start python3, but from rae_sdk.robot import Robot fails (no module rae_sdk)
- Looked into
/home/robothub/data/sources/cfd32267-e4e4-4fb2-8559-6a581da2a077/robotapp.toml because that was installed first.
- Can see that it is v2.0 of the Default agent, and that the image is already ghcr.io/luxonis/rae-ros:v0.2.2-humble. Same for the ae753e9f version, including the copies at /data/persistent. Only the one in
/usr/lib/robothub/builtin-app/robotapp.toml points to "ghcr.io/luxonis/robothub-app-v2:2023.218.1238-rvc3"
- Note that the humble version also contains some pre-launch commands:
export ROS_DOMAIN_ID=30
. /opt/ros/$ROS_DISTRO/setup.sh
. /ws/install/setup.sh
- Checking env | grep ROS gives no response. The Luxonis hub indicates that ae753e9f-23d4-4283-9bf1-fb6bdce4a3d3 is running.
- Checking if I can find those setup.sh. Found them in:
./home/robothub/containers/overlay/7766e7ee54cc4fb418d6658cde54a7af7bca8dd5c6176b29e2a494c5df7c45d0/diff/sai_ros/spectacularai_ros2/install/setup.sh
./home/robothub/containers/overlay/2e309fc781e1adc67486b2d7bcbd209eef185988c3881c075a996ef8df033204/diff/opt/ros/humble/setup.sh
./home/robothub/containers/overlay/293f64214d5d1a98b385cddfe269d9a5ab59e1165bde64270280b844fc07f6a5/diff/ws/install/setup.sh
./home/robothub/containers/overlay/ce5ad1b5e3efd36576fa12656851b3ad762100188603ff5abfa17aead1ce5c5d/diff/opt/ros/humble/setup.sh
- And in the corresponding /data/persistent
- Checked with docker container ls but no containers are running. Also see nothing with docker image ls
- Should start the docker with the commands found in the rae-ros documentation
-
- Read Chapter 9 and 10 from Dudek and Jenkin. Both Pose maintenance and Mapping point back to quite old algorithms (FastSLAM at best). Also, the assignments are not updated to ROS.
May 22, 2024
- Used WS9 to do the TurtleBot example from rerun, but without the rerun node.
- Installed sudo apt install ros-humble-navigation2 ros-humble-turtlebot3 ros-humble-turtlebot3-gazebo, which installed 193 new packages.
- After that I did:
source /opt/ros/humble/setup.bash
export TURTLEBOT3_MODEL=waffle
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/opt/ros/humble/share/turtlebot3_gazebo/models
- Followed by ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False.
- Rviz started up, but complained with a Global Status error (Frame map doesn't exist) and, in the log, about a missing transform from base_link to odom.
- Set the Fixed Frame of rviz to base_link, but that gave a map-error.
- Explicitly created a transform between base_link and odom with . Yet, rviz responded with Message Filter dropping message: frame 'odom' at time 0.000 for reason 'discarding message because the queue is full'.
- In Gazebo I see the world, but I don't see a TurtleBot.
- Looked at nav2 getting started.
- Checked /opt/ros/humble/share/turtlebot3_gazebo/models/turtlebot3_waffle, but model is there. Also TURTLEBOT3_MODEL and GAZEBO_MODEL_PATH are OK.
- Looked into the log. Actually, /opt/ros/humble/share/nav2_bringup/worlds/waffle.model is used. Yet, I see that spawn_entity.py fails on missing lxml.
- Installed lxml (v5.2.2) with pip install lxml, but that gives an empty world in Gazebo:
[spawn_entity.py-3] [INFO] [1716368313.041980781] [spawn_entity]: Loading entity XML from file /opt/ros/humble/share/nav2_bringup/worlds/waffle.model
[ERROR] [gzserver-1]: process has died [pid 290368, exit code 255, cmd 'gzserver -s libgazebo_ros_init.so -s libgazebo_ros_factory.so /opt/ros/humble/share/nav2_bringup/worlds/world_only.model'].
- Tried again with pip install lxml==4.9.4. No success. v3.8.0 could not be built. v4.8.0 still gave an empty world.
- Looked at Turtlebot quickstart.
- There they recommended sudo apt install ros-humble-gazebo-*, which installed 54 additional packages.
- Everything recommended is installed.
- Switching to the simulation page. There they recommend building the simulation from source. Had to do pip install catkin_pkg and deactivate conda, but after that the turtlebot_ws was built. Yet, the burger in the empty world also crashed.
- It seems that pip install lxml==4.6.3 is the latest version that can be installed. Still crashes.
-
- Back to RAE. Recharging helped, the robot started up.
- Went to RobotHub FrontEnd. There I was recommended to use the new FrontEnd at Hub
- There I see one device online (rae), running agent 22.223.1855. The RAE had stopped the default app (v1.2.1 - Nov 2023), which indicated that a new version was available. It didn't want to start. Changed to v2.0.0 (Jan 2024). Seems to want to install to 127.0.0.1, instead of the LAB42 ip. The Front-End gives Download in Progress.
- Back to Get Started with RAE. Finishing up means going to the web-interface of the RAE itself, https://:9010. I can select 'Open Frontend', although that gives 'Waiting for stream' (Chrome and Edge browser). At the bottom right I see Session ID 'Disconnected', so it seems to require that the Default App is running. Actually, I now see in the Perception Apps page two Default Apps which are starting; one of them is now running. According to the overview on the Luxonis hub, it is v2.0.0. The battery level was 47%, and when I connected the USB-C to USB-B cable the device was charging and streaming:
- The first assignment could be to calibrate the camera, because the perspective is quite distorted:
- Note that there is not much view of the ground nearby, so it is better to follow the lines at the ceiling than on the ground.
- Luxonis has Calibration guide
May 21, 2024
- Looked again at the RAE SDK python interface, including the ROS interface. Not clear when it was updated, although they specify firmware OS 1.14, Agent 23.223.1855 and rae-ros:v0.2.2-humble. Those versions are the same as in the discussion of Jan 27, 2024.
- The robothub github is now updated to version 2.6.0 (few weeks old).
- The v2.5.0 update seems to be a major update, with support for cv2 for local development.
- In the blogs I see an update (March 18, 2024) that RobotHub has become a stable product (with a new app every month).
- Again (as Feb. 26), I don't see a yellow or blue light while charging.
- Also WSL Ubuntu has a number of frozen terminals, still running the TurtleBot navigation example. Time to go home.
May 17, 2024
- Finished the remainder of Chapter 6. The Q-learning seems less of interest for VAR.
- Looked again at my progress of April 24 with rerun.
- Some new examples were published, so started with git pull in ~/git/rerun at the Ubuntu 22.04 partition of WSL on nb-dual.
- Yet, this fails on fatal: Need to specify how to reconcile divergent branches.
- Tried the suggested git config pull.rebase false, but that fails because it wants to do a forced update of origin/cmc/dense_chunks_1_client_side (and I was not logged in, so this push fails).
- After git config pull.rebase true the command git pull gives:
Successfully rebased and updated refs/heads/latest
- Continued with the ros_node example.
- Gazebo and rviz start up nicely, but nothing is visible in the rerun-screen. Seems that I have to change to the latest version (0.16.0), which was released last night.
- Did that with git checkout tags/0.16.0. At that moment I was without branch, so I did git switch -, followed by a git pull.
- The viewer was still 0.15.1 (checked with about in upper-left corner), so looks I have to rebuild.
- For that I need pixi, which can be installed with curl -fsSL https://pixi.sh/install.sh | bash
- After source ~/.bashrc I tried the suggested pixi run py-build --release.
- Yet, this fails. Checked the cargo version, which was not installed.
- Installed cargo (with rust) by curl https://sh.rustup.rs -sSf | sh. Activated it by source ~/.cargo/env.
- Continued with pixi run rerun. That works, the viewer is now version 0.16.0.
- Also tried the suggested pixi run py-build --release. Again 351 packages are built. This one fails on couldn't read crates/re_web_viewer_server/src/../web_viewer/re_viewer.js: No such file
- Indeed, those two files are gone. Also cannot find them in the main version.
- Seems that the files are generated. The readme of this crate suggests running pixi run rerun-build-web. Now those two files are there.
- Still, python3 uses the old viewer. Tried python3 -m pip install rerun-sdk==0.16.0. That was the trick to get it working:
- I am now as far as before (but with a webviewer), because I do not see an image from the Realsense.
- The ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False gives many warnings that the transform from base_link to map doesn't exist.
- Explicitly defined this transform with ros2 run tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 map base_link. Now I got far more topics, including intel_realsense_r200_depth/image_raw:
/amcl/transition_event
/amcl_pose
/behavior_server/transition_event
/bond
/bt_navigator/transition_event
/clicked_point
/clock
/cmd_vel
/cmd_vel_nav
/cmd_vel_teleop
/controller_server/transition_event
/cost_cloud
/diagnostics
/downsampled_costmap
/downsampled_costmap_updates
/evaluation
/global_costmap/costmap
/global_costmap/costmap_raw
/global_costmap/costmap_updates
/global_costmap/footprint
/global_costmap/global_costmap/transition_event
/global_costmap/published_footprint
/global_costmap/voxel_marked_cloud
/goal_pose
/initialpose
/intel_realsense_r200_depth/camera_info
/intel_realsense_r200_depth/image_raw
/intel_realsense_r200_depth/points
/joint_states
/local_costmap/clearing_endpoints
/local_costmap/costmap
/local_costmap/costmap_raw
/local_costmap/costmap_updates
/local_costmap/footprint
/local_costmap/local_costmap/transition_event
/local_costmap/published_footprint
/local_costmap/voxel_grid
/local_costmap/voxel_marked_cloud
/local_plan
/map
/map_server/transition_event
/map_updates
/marker
/mobile_base/sensors/bumper_pointcloud
/odom
/parameter_events
/particle_cloud
/performance_metrics
/plan
/plan_smoothed
/planner_server/transition_event
/preempt_teleop
/received_global_plan
/robot_description
/rosout
/scan
/smoother_server/transition_event
/speed_limit
/tf
/tf_static
/transformed_global_plan
/velocity_smoother/transition_event
/waypoint_follower/transition_event
/waypoints
- Tried to visualise this simulated image with ros2 run image_view image_view --ros-args -r image:=/intel_realsense_r200_depth/image_raw, but nothing happens. Will try a restart of the simulator. Note that the transform base_link to odom is also missing.
- Also added that transform, but now I get the warning [rviz]: Message Filter dropping message: frame 'odom' at time 0.000 for reason 'discarding message because the queue is full'
- The transform tool was complaining about deprecated arguments, so tried the new-style ros2 run tf2_ros static_transform_publisher --frame-id base_link --child-frame-id odom
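For reference, the old-style argument order is x y z yaw pitch roll (radians, ZYX convention); a small numpy sketch (my own helper, not part of tf2) of the homogeneous transform those arguments encode:

```python
import numpy as np

def static_transform(x, y, z, yaw, pitch, roll):
    """4x4 homogeneous transform for static_transform_publisher's old-style
    x y z yaw pitch roll argument order (radians, ZYX Euler convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # yaw about z, then pitch about y, then roll about x
    T[:3, 3] = [x, y, z]
    return T
```

So ros2 run tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 map base_link publishes a pure translation (0.1, 0, 0.2) from map to base_link.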
- Could use some of the image_view alternatives mentioned in this blog.
- One missing transform was from base_footprint to map. After that the TF-tree in rviz2 looked OK, but I see no images with image_view.
- Seems that the CPU is a bit overwhelmed, so let's try again doing this natively (at home).
May 16, 2024
- Starting to read Chapter 6. The first problem looks very interesting: learning to follow a line with an ANN with a single hidden layer (first in Gazebo, then real).
- Continue with section 6.7
May 15, 2024
- Read A Standard Rigid Transformation Notation Convention, which not only analyses existing conventions (including Corke's), but is accompanied by a github page. The WRT library has command-line, Python and C++ bindings.
- The proposed notation is summarized in this table:
- When used, the following sentence can be added:
"
In the following, the orientation of {s} with respect to {b} is denoted by $_b\mathbf{R}_s$, and the position of {s} with respect to {b} and expressed in {c} is denoted by $^c_b\mathbf{p}_s$, as defined by the RIGID notation convention [link to this document]
"
- I can also use \usepackage{rigidnotation}, so that I can use $\Rot{s}{b}$ and $\Pos{s}{b}{c}$ instead.
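A minimal usage sketch, assuming the rigidnotation package from the paper's github page defines the \Rot and \Pos macros as described:

```latex
\documentclass{article}
\usepackage{rigidnotation} % assumed available from the WRT github page
\begin{document}
The orientation of $\{s\}$ with respect to $\{b\}$ is $\Rot{s}{b}$, and the
position of $\{s\}$ with respect to $\{b\}$, expressed in $\{c\}$, is
$\Pos{s}{b}{c}$.
\end{document}
```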
-
- Read chapter 5 (Vision and Algorithms). A lot is dedicated to 'old' CV, for instance the calibration section points to tools from 2000. The Gazebo exercises look interesting.
May 14, 2024
- Read Chapter 4 (Non-Visual Sensors and Algorithms). Not only Sonar and Lidar, but also EKF / Particle SLAM are already covered.
April 24, 2024
- Looking if I can get rerun ros_node working on my Ubuntu 22.04 WSL on nb-dual.
- Rerun requires sudo apt install ros-humble-desktop gazebo ros-humble-navigation2 ros-humble-turtlebot3 ros-humble-turtlebot3-gazebo, so I have to see if the visual parts work.
- No version of ros installed on the WSL Ubuntu 22.04, so first had to setup sources, following install instructions of ROS humble.
- Did a clone of rerun github.
- Checking if WSL allows graphical programs following WSL instructions. Gimp works, although the program gives warnings that the theme engine cannot find module_path: "pixmap".
- Running the TurtleBot examples works out of the box:
- Yet, not clear how scans and or observations should be streamed. Checked with ros2 topic list; the following topics are published:
/clicked_point
/clock
/downsampled_costmap
/downsampled_costmap_updates
/global_costmap/costmap
/global_costmap/costmap_updates
/global_costmap/voxel_marked_cloud
/initialpose
/joint_states
/local_costmap/costmap
/local_costmap/costmap_updates
/local_costmap/published_footprint
/local_costmap/voxel_marked_cloud
/local_plan
/map
/map_updates
/mobile_base/sensors/bumper_pointcloud
/parameter_events
/particle_cloud
/plan
/robot_description
/rosout
/scan
/tf
/tf_static
/waypoints
- Looked into python/ros_node/main.py. The ros_node is ready for a depth camera, yet this sensor is not mounted on the simulated TurtleBot3:
self.path_to_frame = {
    "map": "map",
    "map/points": "camera_depth_frame",
    "map/robot": "base_footprint",
    "map/robot/scan": "base_scan",
    "map/robot/camera": "camera_rgb_optical_frame",
    "map/robot/camera/points": "camera_depth_frame",
}
- The depth camera is in principle an intel_realsense_r200_depth.
- Could look into this github repository to add the Intel Realsense to the TurtleBot.
- Yet, the turtlebot3_waffle is not in the ~/.gazebo/models-directory. Looking at the nav2 demo, it seems that the model is loaded from /opt/ros/humble/share/nav2_bringup/worlds/waffle.model
- Note that during nav2 bringup, I receive many warnings that frame "odom" doesn't exist.
- Checked that model. The realsense_depth_camera is already part of that model, which should load the driver libgazebo_ros_camera.so
- Seems that this can be installed with sudo apt-get install ros-humble-gazebo-plugins, but that was already implicitly done.
- This example shows a Kinect used in nav2. The depth camera seems to produce scans, but in the model an LDS lidar is also mounted, which also seems to produce scans.
April 23, 2024
- Scanned this EDUCON paper, which describes an @Home-like course. Many links are given, except to the course itself.
April 22, 2024
April 12, 2024
April 11, 2024
- Read the first two chapters from Dudek's 3rd edition. The history chapter feels a bit ancient, but I liked Chapter 2, where 'all' robot problems were demonstrated with a point-robot. Already in Chapter 2 there is an extensive list of exercises.
March 6, 2024
- Looked at the first exercise, drawing a cube on the chessboard. The camera matrix K.txt is already given, so time to look into the calibration tools from Scaramuzza.
March 5, 2024
- Checking the Vision Algorithms for Mobile Robotics slides.
-
- From the 1st lecture - Introduction I really liked the example of Roomba 980's Visual SLAM to speed up the cleaning (slide 38).
- Next to Chapter 4, Szeliski's book Computer Vision gets the most attention.
- For sure Chapter 11 - Structure from motion and SLAM and onwards are interesting.
- Szeliski's book is also used in Computer Vision 1 and Computer Vision 2.
- In the CV2 Overview a Project Evaluation Criteria can be found.
- Structure from motion and SLAM is covered in the 3rd lecture of CV2. No explicit reference to Chapter 11 is made, only to the book itself (the 1st edition, 2010).
- Chapter 12 is Depth Estimation, which is not covered by CV2.
- Lecture 4 covers Chapter 13 - 3D Reconstruction.
- The last chapter of Szeliski is Image-Based rendering, which is also not explicitly covered in CV2.
- For this course Chapter 8 - Image Alignment and Stitching could cover BEV images, while Chapter 9 - Motion Estimation seems the most important. No idea if this is covered in CV1.
- Actually, BEV is not covered in Chapter 8. It is not mentioned at all in the book.
- The first lecture - Introduction ends with the learning goal: to build a full visual odometry pipeline (for a Mars rover).
- This article gives an overview of Open SLAM challenges (Table 2), and points back to an overview paper of 2021 and Scaramuzza's overview paper from 2016.
- Finishing the mini project gives a 0.5 grade bonus.
-
- Also checked 2nd lecture - image formation. This corresponds to Chapter 2.1 and 2.3 of Szeliski's book.
- The lab exercise is to implement an augmented reality wireframe cube. Can be done with a calibration board or the floor of our Intelligent Robotics Lab.
-
- Also checked 3rd lecture - camera calibration. This corresponds to Chapter 2.1 of Szeliski's book.
- The lab exercise is to implement a camera motion estimator.
- Camera localisation can be solved with 3 or 4 points. For 3 points the law of cosines is used.
- The 3 point algorithm can be found in OpenCV based on Gao's algorithm.
- Scaramuzza was a co-author of a P3P variant (also in OpenCV), which solves the camera pose directly (but not the distances from the points).
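The law-of-cosines step behind P3P can be checked numerically; a minimal sketch with made-up points (in practice the angle between the viewing rays comes from the calibrated image bearings, and the ranges are the unknowns to be solved for):

```python
import numpy as np

# The law of cosines links the ranges s1 = |C P1|, s2 = |C P2| and the angle
# theta between the two viewing rays to the known inter-point distance d12:
#   d12^2 = s1^2 + s2^2 - 2 s1 s2 cos(theta)
C = np.zeros(3)
P1 = np.array([1.0, 0.0, 4.0])
P2 = np.array([-1.0, 1.0, 5.0])

s1 = np.linalg.norm(P1 - C)
s2 = np.linalg.norm(P2 - C)
cos_theta = (P1 - C) @ (P2 - C) / (s1 * s2)

d12 = np.sqrt(s1**2 + s2**2 - 2 * s1 * s2 * cos_theta)
print(np.isclose(d12, np.linalg.norm(P1 - P2)))  # True
```

P3P writes this relation down for all three point pairs, giving a polynomial system in the three unknown ranges.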
- The lecture ends with unwarping fish-eye and omnidirectional cameras; the bird eye view is also not covered.
- Would be nice to solve this lab exercise in ROS.
-
- Also checked 4th lecture - image filtering. This has not only the classic filters, including Canny edge. It also shows that HED, a CNN-based detector from 2015, outperforms Canny both in performance and computation time. Transformer models are now state-of-the-art. This corresponds to Chapter 3.2, 3.3 and 7.2.1 of Szeliski's book.
-
- Also checked 5th lecture - point feature detection and matching. This corresponds to Chapter 7.1 and 9.1 of Szeliski's book and Peter Corke's book (13.3 in 2nd edition - 12.3 in 3rd edition).
- The Lab exercise is to implement the Harris corner detector.
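A minimal numpy sketch of that exercise, assuming the standard det − k·trace² response over a small box window (a real implementation would use Gaussian weighting and non-maximum suppression):

```python
import numpy as np

def box_blur(a, r=1):
    """Mean filter over a (2r+1)^2 box via shifted sums of an edge-padded copy."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:p.shape[0] - r + dy, r + dx:p.shape[1] - r + dx]
    return out / (2 * r + 1) ** 2

def harris_response(img, k=0.04, r=1):
    """Harris response det(M) - k*trace(M)^2 of the windowed structure tensor M."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_blur(Ix * Ix, r)
    Syy = box_blur(Iy * Iy, r)
    Sxy = box_blur(Ix * Iy, r)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

Corners give strongly positive responses, edges negative ones, and flat regions stay near zero.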
-
- The 6th lecture continues with point feature detection and matching. This also corresponds to Chapter 7.1 of Szeliski's book and Peter Corke's book (13.3 in 2nd edition - 12.3 in 3rd edition).
- The Lab exercise is to implement the SIFT blob detector.
-
- The 7th lecture starts with multi-view geometry. Recovering 3D structure from two images is seen as a simple form of 2-view geometry. With K,T,R known this is stereo vision. This corresponds to Chapter 12 of Szeliski's book and Peter Corke's book (14.3 in 2nd edition - 14.3 in 3rd edition). It would be nice to work with the RAE in a corridor with lines radiating from the epipole, while doing forward motion with the robot.
- The Lab exercise is stereo vision.
-
- The 8th lecture continues with multi-view geometry. Recovering 3D structure from two images while K,T,R are unknown is seen as a simple form of 2-view structure from motion. This can only work if there are at least 5 correspondences, although that leaves open 10 distinct solutions. With the 8-point algorithm this is solved. This corresponds to Chapter 11.3 of Szeliski's book and Peter Corke's book (14.2 in 2nd edition - 14.2 in 3rd edition). This also means that here we have overlap with CV2. Scaramuzza's order seems more logical than Szeliski's order.
- The Lab exercise is to implement the 8-point algorithm
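A minimal numpy sketch of that exercise (unnormalized for brevity; real pixel data needs Hartley normalization of the coordinates first):

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental matrix from N >= 8 correspondences (Nx2 arrays).
    Each correspondence gives one row of the linear system A f = 0."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)          # null vector = stacked F entries
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt
```

For every correspondence the epipolar constraint x2ᵀ F x1 ≈ 0 should then hold (in homogeneous coordinates).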
-
- The 9th lecture continues with multi-view geometry with batches of images, so recovering 3D structure from more than two images. The redundancy can be used to remove outliers, because before, perfect correspondence was assumed. Finally computer vision is applied to robotics, because the assumption is made that not every movement of the camera is possible. For a planar robot planar motion can be assumed (see the motion model of Thrun's book). The lecture ends with SuperGlue, which includes a Deep Front-End and Middle-End! No correspondence with the two textbooks is made.
- The Lab exercise is on Visual Odometry integration and the mini-projects.
-
- The 10th lecture finishes multi-view geometry with the whole Visual Odometry pipeline, which combines 2D-to-2D motion estimation (Lecture 8), 3D-to-2D (Lecture 3) and 3D-to-3D (point cloud registration). SLAM algorithms such as SLAM++ are mentioned in the last step, Pose-Graph Optimization. Pose-Graph Optimization is less precise, but more efficient. Loop closure will be covered in Lecture 12, to upgrade VO to Visual SLAM. No correspondence with the two textbooks is made.
- The Lab exercise is to combine the P3P algorithm with RANSAC.
-
- The 11th lecture covers Tracking. No correspondence with the two textbooks is made. Instead, they refer to Lucas Kanade 20 Years On: A Unifying Framework.
- The Lab exercise is to implement the Kanade-Lucas-Tomasi tracker
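A minimal numpy sketch of the core Lucas-Kanade step behind KLT (single window, single least-squares solve; the full tracker adds image pyramids and iterates):

```python
import numpy as np

def lk_flow(I0, I1, y, x, r=5):
    """One Lucas-Kanade solve for the flow (u, v) of pixel (y, x), using a
    (2r+1)^2 window and the linearization Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    win = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.column_stack([Ix[win].ravel(), Iy[win].ravel()])
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

The least-squares system is only well-conditioned where the structure tensor has two large eigenvalues, which is exactly why KLT tracks corner features.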
-
- The 12th lecture consists of three parts. The first part covers Place Recognition (yet, Bag-of-Visual-Words is described, but not applied to loop-closure). No correspondence with the two textbooks is made. The second part covers Dense 3D Reconstruction (or multi-view stereo), where the Disparity Space Image is explained. This covers Chapter 12.7 of Szeliski's book. The third part starts very general with Deep Learning, but then focuses on methods relevant for Computer Vision like NeRF.
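The Bag-of-Visual-Words signature from the first part can be sketched in a few lines (hard assignment to a given codebook; a real system trains the codebook with k-means and adds tf-idf weighting):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest visual word and
    return an L1-normalized histogram (the image signature compared
    between frames to find place-recognition / loop-closure candidates)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two images of the same place should then give histograms with a small distance, regardless of where in the image the features appear.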
-
- The 13th lecture covers Visual Inertial Fusion. The RAE also has an IMU (check - yes, a BMI270 6-axis IMU), so this is relevant for our course. The loosely coupled version of VIO should be easy to implement (although inaccurate, and it should not be used). The case study ROVIO is EKF-based. No correspondence with the two textbooks is made.
- The Lab exercise is Bundle Adjustment.
-
- The last lecture is about Event-Based vision, which is not relevant for our course. The event camera is seen as an IMU on steroids, so they present the combination of events, images and IMU as Ultimate SLAM?. No correspondence with the two textbooks is made.
- The Lab exercise is the final VO integration.
-
- Let's assume that CV1 covers Szeliski's Chapter 2 and 3 well enough. So, I should check Chapter 7 and beyond.
- Received an overview of CV1 from Shaodi. Not only Chapter 2 and 3 are covered. In the 3rd week motion and optical flow are covered (Chapter 7 and 9). The 4th week covers stitching (sections 8.1, 8.2, 6.1 and 6.2). In the 5th week sections 5.2, 6.3 and 6.4 are covered. In the 6th week sections 5.3, 5.4, 12.1, 12.2, 12.3 and 13.2 are covered.
- This combines with CV2, which covers Chapter 11 and 13.
- So, only Chapter 10 (Computational photography) and Chapter 14 (Image-based rendering) are skipped.
-
- For Peter Corke's book, I should look at Chapters 12 till 14. On September 18, 2023 I read until Chapter 6. Chapters 7 and 8 are purely robotics. Chapters 9-11 are quite basic, although section 11.7 covers reshaping images, including image pyramids and warping. Bird-Eye view is also not covered in this book.
- Read Section 11.7. Exercise 8b from this chapter is nice (write a function to convert pixel coordinates to latitude and longitude, and overlay GPS data on a satellite image of your home).
- Another option is to modify exercise 9 (track moving vehicles) to the RAE in the lab (from a ceiling camera).
- Read Chapter 12 until 12.1.1. Nicely indicates that in robotics we can move the robot to a better viewpoint or add a diffuse light source near the camera to reduce effects like specular highlights. Next will be Object Instance Representation.
March 4, 2024
- Looking up the learning goals of the MscAI. I couldn't find learning trajectories in datanose, but found at least the course objectives of Computer Vision 1 and Computer Vision 2.
- Checked the TER 2023-2024. Objectives are split into two parts. First the general academic ones, after that the exit qualifications. Next to knowledge of the current theories and methods of AI and its subfields, also:
- has the capability to apply this knowledge to analyse, design and develop AI-systems;
- can formulate scientific questions and is able to solve problems with the aid of abstraction and modelling;
- is able to contribute to further developments of the theories, methods and techniques of AI in a scientific context;
- is able to express him/herself clearly on a technical/mathematical and general level;
- is aware of the social context and consequences of conducting AI research and work;
- can obtain an academic position at a university or research centre or scientific/applied position in the industry.
- Looking into the two textbooks. Peter Corke's book has its objectives in the preface:
- Give a cohesive narrative that covers robotics and computer vision - both separately and together.
- Show how complex problems can be decomposed and solved.
- Allow the user to work with real problems, not just trivial examples.
- From Peter Corke's introduction:
- The book will help to fill the gaps between what you learned as an undergraduate and what is required to underpin your deeper study of robotics and computer vision.
- Another option is to work on the applications and look what knowledge is needed for a solution.
- Another option is to structure the course as done at the Robot Academy.
- From Dudek and Jenkin's book I have for the moment only the 2nd edition. The book is not very explicit on its objectives.
- These are the objectives we proposed to the Master - Upon successful completion, students will have the knowledge and skills to:
- Discuss the history, concepts and key components of robotics technologies.
- Describe and compare various robot sensors and their perception principles that enable a robot to analyze its environment, reason, and take appropriate actions toward the given goal.
- Analyze and solve problems in spatial coordinate representation and spatial transformation, robot locomotion, kinematics, motion control, localization and mapping, navigation and path planning.
- Apply and demonstrate the learned knowledge and skills in practical robotics applications.
- Critically appraise current research literature and conduct experiments with state-of-the-art robotic algorithms on a robotic platform.
- Effectively communicate engineering concepts and design decisions using a range of media.
-
- Looked at Zurich's Vision Algorithms for Mobile Robots course.
- It has several exercises, including those on Visual Odometry. It also has an optional Mini project on Visual Odometry.
- Note that this course is also based on Peter Corke's book. In addition, it points to Chapter 4 - Perception of "Autonomous Mobile Robots", by R. Siegwart, I.R. Nourbakhsh, D. Scaramuzza
- In addition, I see Zisserman's Multiple View Geometry, 2003.
- I also see the classic An Invitation to 3D Vision, by Y. Ma, S. Soatto, J. Kosecka, S.S. Sastry (Nov 2001)
- Last but not least Computer Vision: Algorithms and Applications, by Richard Szeliski
-
March 1, 2024
February 29, 2024
- Received the 3rd edition of Computational Principles of Mobile Robotics.
- In the preface they recommend concentrating on Chapters 6-10. They especially put focus on the Deep Learning chapter.
- They even have ROS2 compatible material with the book (Instructor resource).
- The book comes with exercises. The 2nd chapter directly has SLAM for a point-robot, including drive_to_goal.py code.
- For a high-level introduction to Deep Learning (in Chapter 6), they point to the book of Goodfellow, Bengio and Courville. As a short introduction, they point to the IEEE Spectrum article How Deep Learning Works, which is mainly a visualisation of the backpropagation steps.
- Not much to find on the personal pages of Gregory Dudek and Michael R. M. Jenkin (not teaching this term)
- The last class on Mobile Robotics listed was Winter 2013.
- Still, I like the level of the book.
February 28, 2024
- Looking into older forum posts. This post was working on a test Perception App with Python. The recommendation was to use the RAE Starter template (not the regular template) and to remove the RAE Default app.
- This post was working with the Gazebo environment. The mentioned fix was integrated into the main Humble branch 3 weeks ago.
- In this post they found that the audio_spectrum.py example just demos the LCD screen, and that actual sound can be played via the ROS speakers_node.
- In this post Ivo was able to visualize the videostreams with rviz.
- Finally found the post I was looking for, with the charging check via cat /sys/class/power_supply/bq27441-0/status
- Executing this command just gave the answer Charging. capacity_level gave Normal. charge_now gave 886000, while charge_full gave 1510000, so that would mean that the battery level is now 58%.
- The frontend gave a battery level of 96%, so there is some mismatch there. Positive news is that I received a video stream!
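The battery arithmetic above can be captured in a small helper, assuming the sysfs paths from the forum post (the bq27441-0 directory name may differ per unit); run on the RAE itself:

```python
# Sketch: compute the battery percentage from the bq27441 fuel gauge sysfs
# entries, as read manually above. Path is an assumption from the forum post.
from pathlib import Path

GAUGE = Path("/sys/class/power_supply/bq27441-0")

def battery_percentage(gauge_dir: Path = GAUGE) -> float:
    # charge_now and charge_full are plain integer files (in uAh)
    charge_now = int((gauge_dir / "charge_now").read_text())
    charge_full = int((gauge_dir / "charge_full").read_text())
    return 100.0 * charge_now / charge_full
```

With the values observed above (886000 / 1510000) this gives roughly 58.7%, matching the 58% estimate.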
- Could even control the motors with keyboard!
- Switched the RAE off by pushing the power button for 12 seconds (instead of the 8 seconds mentioned in the forum).
February 27, 2024
- According to a post of last Sunday, Luxonis is working on a RAE python SDK.
- This post describes more details to how to get the RAE python SDK working.
- This post reports that the battery drains in 34 minutes and that the Wifi has only 10% of the normal Wifi bandwidth (Luxonis is working on it).
- Found a factory reset procedure in this post.
- At the end they point to this post, which gives flashtool instructions.
- Yet, it is not specific about how to install the flashtool, and the troubleshooting link is gone (also not on the Wayback Machine). I did find the firmware page.
-
- Did a reboot by pushing the power and reset button at the same time. With dmesg | tail I see that there is a connection:
[499580.376346] usb 1-2: new high-speed USB device number 11 using xhci_hcd
[499580.524873] usb 1-2: New USB device found, idVendor=8087, idProduct=0b39, bcdDevice= 0.01
[499580.524876] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[499580.524877] usb 1-2: Product: Intel Movidius Keem Bay 3xxx
[499580.524877] usb 1-2: Manufacturer: Intel Corp.
- Nothing on the display. Didn't continue directly, not sure if the connection stayed (dmesg complains a lot about snap.firefox).
- At least ping 192.168.197.55 gives no response.
- Also ssh root@192.168.197.55 gives no response.
- The LED2 is also no longer blue, only the PGOOD, STATUS and BB-SYS.
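Instead of repeatedly pinging by hand, a small TCP probe of the SSH port tells the same story. A minimal sketch, assuming the RAE's usual address 192.168.197.55 (ICMP ping needs raw sockets, so probing port 22 is simpler and also confirms sshd is up):

```python
# Sketch: check whether the RAE's SSH port accepts connections.
import socket

def ssh_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# In the situation above this returns False, consistent with the dead ping/ssh.
print(ssh_reachable("192.168.197.55"))
```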
-
- Found the flashtool via this post, which points to this document, which indicates to do python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ flashtool
- Checked artifactory, v2.9.0 is the latest version (Aug 2023).
- Download the FIP and OS (luxonisos-1.14-rae.zip). Because 4 *.img files are needed, I probably have to unzip it first. That is correct.
- After updating the udev rules, the command flashtool flashall -d usb fip-dm3370-r5m2e5-1.8.1.bin data.img boot.img system.img syshash.img gives the output:
/home/arnoud/Install/Luxonis/fip-dm3370-r5m2e5-1.8.1.bin is a valid FIP
/home/arnoud/Install/Luxonis/data.img is a valid data image
/home/arnoud/Install/Luxonis/boot.img is a valid boot image
/home/arnoud/Install/Luxonis/system.img is a valid system image
/home/arnoud/Install/Luxonis/syshash.img is a valid verity image
No USB devices found. Please check your connection and ensure the device in recovery mode
That is strange, because dmesg is still OK, LED2 is still on.
- The USB-device also pops up when queried with lsusb:
Bus 001 Device 015: ID 8087:0b39 Intel Corp. Intel Movidius Keem Bay 3xxx
- Yet, when flashtool is run in verbose mode (-v) I get:
cmd:./x86_64/fastboot devices -l, stdout:
matches []
USB device list: []
- Moved back to nb-dual (native Ubuntu 20.04), installed the flashtool and the luxonisos images.
- Asked a second hand to put the system in recovery mode. Now the flashtool asks for the USB device to update the udev rules. After that, the flash starts:
FIP flashed successfully for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 created GPT in 0.0160s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed data in 26.9830s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed boot_a in 16.9080s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed boot_b in 16.7980s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_a in 211.7190s
Successfully flashed device with MX ID: 58927016838860C5 on USB port: 3-4
- Note that the script does not respond to KeyboardInterrupts; Ctrl-C seems to be ignored.
- I don't seem to make progress. Trying again.
- Now it fails after a short while:
FIP flashed successfully for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 created GPT in 0.0150s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed data in 28.2880s
- Tried again, this time (3x) it works until the end:
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_a in 208.3450s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed system_b in 311.7550s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed syshash_a in 5.6530s
3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5 flashed syshash_b in 5.5260s
OS flashing done for device 3-4 Intel Keembay USB(in OS recovery mode): 58927016838860C5. Please shutdown the system, set the boot switch to normal and reboot the system
Successfully flashed device with MX ID: 58927016838860C5 on USB port: 3-4
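Since the flash only succeeded on the third attempt, a retry wrapper could save some babysitting. A sketch under two assumptions: that flashtool returns a non-zero exit code on failure (not verified), and using the image filenames from the flashall invocation above:

```python
# Sketch: retry `flashtool flashall` a few times, as the flash above needed
# three attempts before completing. Exit-code semantics are an assumption.
import subprocess

IMAGES = ["fip-dm3370-r5m2e5-1.8.1.bin", "data.img", "boot.img",
          "system.img", "syshash.img"]

def flash_with_retries(max_attempts: int = 3) -> bool:
    """Run flashtool flashall up to max_attempts times; True on success."""
    for attempt in range(1, max_attempts + 1):
        print(f"flash attempt {attempt}/{max_attempts}")
        result = subprocess.run(["flashtool", "flashall", "-d", "usb", *IMAGES])
        if result.returncode == 0:
            return True
    return False
```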
- Could also do ssh root@192.168.197.55. Note that lsusb gives no output, but dmesg now gives:
[ 7653.413423] usb 3-8: New USB device found, idVendor=1d6b, idProduct=0103, bcdDevice= 0.01
[ 7653.413433] usb 3-8: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 7653.413436] usb 3-8: Product: USB Ethernet
[ 7653.413439] usb 3-8: Manufacturer: Luxonis
[ 7653.413441] usb 3-8: SerialNumber: t00000002
[ 7653.436996] cdc_ncm 3-8:1.0: MAC-Address: 22:9c:a8:15:ad:c9
[ 7653.437369] cdc_ncm 3-8:1.0 usb0: register 'cdc_ncm' at usb-0000:00:14.0-8, CDC NCM, 22:9c:a8:15:ad:c9
[ 7654.037948] cdc_ncm 3-8:1.0 enx229ca815adc9: renamed from usb0
[ 7654.146394] IPv6: ADDRCONF(NETDEV_CHANGE): enx229ca815adc9: link becomes ready
- The RAE logo showed on the little screen, followed by the register-at-RobotHub screen. Pointed the RAE to our wifi. The screen gives the message 'Connected to Robothub'.
- After a short while the robot also appears in the list of RobotHub robots, and its ip can be seen:
-
- Note that Francisco Martin Rico is planning to update his book Robot Programming with ROS2.
-
- The documentation of robothub is also gone. The github is still there. This branch is updated last week, the feature/local_development branch was updated an hour ago.
-
- Next thing to do is to test the RAE python SDK.
- According to documentation, I should have Luxonis OS 1.14 (check) and Agent version 23.223.1855 (checked in RobotHub Overview).
- Next would be the correct image in the robotapp.toml. Logged in at the RAE and searched for a robotapp.toml in /home/robothub/data. This toml-file uses ghcr.io/luxonis/robothub-app-v2:2023.218.1238-rvc3 as runtime.runs_on. The SDK mentions ghcr.io/luxonis/rae-ros:v0.2.2-humble.
- Ivo German mentioned in his post the StarterApp, which runs on luxonis/rae-ros-robot:dai_ros_py_base.
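For reference, the relevant fragment of such a robotapp.toml would look roughly like this; only the runtime.runs_on key was actually observed on the robot, the table layout is an assumption:

```toml
# Hypothetical fragment - only runtime.runs_on is confirmed from the robot.
[runtime]
runs_on = "ghcr.io/luxonis/rae-ros:v0.2.2-humble"
```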
- In the meantime I am kicked out of the ssh-session and I see the RAE screen coming and going. Is the robot busy with an update?
- No App is assigned to the RAE robot. When I use the Install App, I have the choice between four Luxonis Apps: RAE Default, Streams, Emotion Recognition, Car counting App.
- Selected the RAE - Default app. From this app I have a choice between version 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 2.0.0.
- Because I am not connected, the App is not updated. The robot seems to reboot constantly. Already low on power?
- Disconnected the USB, which gives a constant RAE screen. In RobotHub it makes a connection and starts to update the Agent version (without asking) from 23.233.1855 to 24.031.1223. RobotHub indicates that the upgrade is at 100%, that it is waiting for the device to connect, that the RAE is online, and that the upgrade takes longer than usual. Again the RAE vanished a moment from the screen (reboot?). The overview indicated that the install failed, and the agent version is still 23.223.1855. There is an upgrade button (OTA - over the air?).
- Looking for details, the error is that the App is not built. RobotHub shows again that an upgrade is in progress, the blue led is blinking, and no connection to the wireless ip is possible.
- Changed the connection to the left USB-C port directly (because Windows complained that the RAE required too much power).
- Now the RAE is visible again in RobotHub. Deselected the Default App 2.0.0 and tried version 1.2.1. Current status is "Download in progress" (Perception App - tab). Finally, it becomes Initializing and Running:
- Tried to run the front-end of the App. The Stream is connecting. According to the icons at the bottom, the battery is at 0% and the internet at 10 Mb/s.
-
- From this post it seems that on Jan 25 Ivo was more or less at the same point as I am now.
February 26, 2024
- No yellow or blue light while charging with a USB-C.
- A simple ping 192.168.197.55 also gives no response.
-
- The black ground plate is epoxied to the white cover. With a scalpel we were able to lift the corners:
- You need a Torx screwdriver for the four screws at the corners. We disconnected both connectors (battery and charging pad), which gives us full access to the inside:
- When applying the sequence of disconnecting, connecting, booting and powering, the green BB-SYS LED lights up:
- After a while the blue LED2 starts to blink, gets solid, whereafter all LEDs go out again:
- When connecting the USB-side to my laptop, the PGOOD led lights up green, together with an orange status LED:
- When only connected to USB, the orange status LED is blinking. No difference if I use the Dell adapter ring or USB-C to B connector, nor if I use the left USB.
- Could check if I have another 3.7V battery, to test whether this one is broken.
-
- The battery has type JV8550105, which is a 3.7 V Battery with 5000mAh with a Molex-3 NTC connector.
- Couldn't find this type directly; a similar type has 3 connectors and is as flat, but broader and shorter.
- The required width is 48mm, the length is 95mm without connectors, 105mm with connectors. Thickness is 8.5mm.
- So in the type number, the first two digits indicate the thickness, the next two the width, and the last three the length.
- Found such a battery here, although the FTN-P8550105 has a higher capacity (6000 mAh).
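The type-number convention inferred above can be captured in a tiny (hypothetical) helper:

```python
# Sketch: decode a battery type number like JV8550105 or FTN-P8550105 into
# (thickness mm, width mm, length mm), per the convention inferred above.
def decode_battery_type(code: str) -> tuple[float, int, int]:
    digits = code[-7:]                   # e.g. "8550105" from JV8550105
    thickness = int(digits[:2]) / 10.0   # "85" -> 8.5 mm
    width = int(digits[2:4])             # "50" -> 50 mm
    length = int(digits[4:])             # "105" -> 105 mm (with connectors)
    return thickness, width, length

print(decode_battery_type("JV8550105"))  # -> (8.5, 50, 105)
```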
February 23, 2024
- Formulated a question for the forum. As an answer I got a picture of an open RAE. The screws should be in the corners, but are quite well hidden.
- I have the feeling that the screws are in the white cover, and that the black bottom plate is clipped.
- The LED at the side is now yellow, instead of blue. Blue should mean on; yellow means off but charging.
February 22, 2024
- Trying to startup the RAE, but still no response. The only response is a small blue light when I push the power button.
- Should try to charge via the charging pad, instead of the USB-C. Next option would be a factory reset, as described here.
- Trying to charge it via the charging pads for an hour.
- Seems that I can check if it is charging by cat /sys/class/power_supply/bq27441-0/status (once I am logged in via ssh), according to this post
- Should also try the double press to switch it off, before starting it up again.
February 20, 2024
January 26, 2024
January 25, 2024
- For the shopping-list: RAE.
January 2, 2024
Previous Labbooks