Repository Summary
Field | Value |
---|---|
Checkout URI | https://github.com/IntelRealSense/realsense-ros.git |
VCS Type | git |
VCS Version | ros2-master |
Last Updated | 2024-09-02 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
realsense2_camera | 4.55.1 |
realsense2_camera_msgs | 4.55.1 |
realsense2_description | 4.55.1 |
README
ROS Wrapper for Intel(R) RealSense(TM) Cameras
Latest release notes
[![rolling][rolling-badge]][rolling] [![iron][iron-badge]][iron] [![humble][humble-badge]][humble] [![foxy][foxy-badge]][foxy] [![ubuntu22][ubuntu22-badge]][ubuntu22] [![ubuntu20][ubuntu20-badge]][ubuntu20]
Table of contents
ROS1 and ROS2 Legacy
Intel RealSense ROS1 Wrapper
The Intel RealSense ROS1 Wrapper is no longer supported, since our development team is focusing on ROS2 distributions. For the ROS1 wrapper, go to the ros1-legacy branch.
Moving from ros2-legacy to ros2-master
* Changed Parameters:
  - **"stereo_module"** and **"l500_depth_sensor"** are replaced by **"depth_module"**
  - For video streams: **\

Step 1: Install the ROS2 distribution
#### Ubuntu 22.04
- [ROS2 Iron](https://docs.ros.org/en/iron/Installation/Ubuntu-Install-Debians.html)
- [ROS2 Humble](https://docs.ros.org/en/humble/Installation/Ubuntu-Install-Debians.html)

#### Ubuntu 20.04
- [ROS2 Foxy](https://docs.ros.org/en/foxy/Installation/Ubuntu-Install-Debians.html)

Step 2: Install the latest Intel® RealSense™ SDK 2.0
**Please choose only one option from the 3 options below (to prevent multiple version installations and workspace conflicts)**

#### Option 1: Install the librealsense2 debian package from Intel servers
- Jetson users - use the [Jetson Installation Guide](https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md)
- Otherwise, install from the [Linux Debian Installation Guide](https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages)
  - In this case, treat yourself as a developer: make sure to follow the instructions to also install the librealsense2-dev and librealsense2-dkms packages

#### Option 2: Install the librealsense2 debian package (without graphical tools and examples) from ROS servers (the Foxy EOL distro is not supported by this option)
- [Configure](http://wiki.ros.org/Installation/Ubuntu/Sources) your Ubuntu repositories
- Install all realsense ROS packages with `sudo apt install ros-

Step 3: Install the Intel® RealSense™ ROS2 wrapper
#### Option 1: Install the debian package from ROS servers (the Foxy EOL distro is not supported by this option)
- [Configure](http://wiki.ros.org/Installation/Ubuntu/Sources) your Ubuntu repositories
- Install all realsense ROS packages with `sudo apt install ros-

# Installation on Windows

**PLEASE PAY ATTENTION: The RealSense ROS2 Wrapper is not meant to be supported on Windows by our team, since ROS2 and its packages are still not fully supported on Windows. We added the installation steps below to make it easier for users who have already started working with ROS2 on Windows and want to take advantage of the capabilities of our RealSense cameras.**
Step 1: Install the ROS2 distribution
#### Windows 10/11

**Please choose only one option from the two options below (to prevent multiple version installations and workspace conflicts)**

- Manual install from the ROS2 formal documentation:
  - [ROS2 Iron](https://docs.ros.org/en/iron/Installation/Windows-Install-Binary.html)
  - [ROS2 Humble](https://docs.ros.org/en/humble/Installation/Windows-Install-Binary.html)
  - [ROS2 Foxy](https://docs.ros.org/en/foxy/Installation/Windows-Install-Binary.html)
- Microsoft IOT binary installation:
  - https://ms-iot.github.io/ROSOnWindows/GettingStarted/SetupRos2.html
  - Note that the installation examples are for the Foxy distro (which is not supported anymore by the RealSense ROS2 Wrapper)
  - Please replace the word "Foxy" with Humble or Iron, depending on the chosen distro.

Step 2: Download RealSense™ ROS2 Wrapper and RealSense™ SDK 2.0 source code from github:
- Download the Intel® RealSense™ ROS2 Wrapper source code from [Intel® RealSense™ ROS2 Wrapper Releases](https://github.com/IntelRealSense/realsense-ros/releases)
- Download the corresponding supported Intel® RealSense™ SDK 2.0 source code from the **"Supported RealSense SDK" section** of the specific release you chose from the link above
- Place the librealsense folder inside the realsense-ros folder, so that the librealsense package sits beside the realsense2_camera, realsense2_camera_msgs and realsense2_description packages

Step 3: Build
1. Before building our packages, make sure you have OpenCV for Windows installed on your machine. If you chose the Microsoft IOT installation, it is installed automatically. Later, when running colcon build, you might need to expose this installation folder by setting the CMAKE_PREFIX_PATH, PATH, or OpenCV_DIR environment variables.
2. Run "x64 Native Tools Command Prompt for VS 2019" as administrator.
3. Set up the ROS2 environment (do this for every new terminal/cmd you open):
   - If you chose the Microsoft IOT binary option for installation:
     ```
     > C:\opt\ros\humble\x64\setup.bat
     ```
   - If you chose the ROS2 formal documentation:
     ```
     > call C:\dev\ros2_iron\local_setup.bat
     ```
4. Change directory to the realsense-ros folder:
   ```bash
   > cd C:\ros2_ws\realsense-ros
   ```
5. Build the librealsense2 package only:
   ```bash
   > colcon build --packages-select librealsense2 --cmake-args -DBUILD_EXAMPLES=OFF -DBUILD_WITH_STATIC_CRT=OFF -DBUILD_GRAPHICAL_EXAMPLES=OFF
   ```
   - You can add the `--event-handlers console_direct+` parameter to see more debug output from the colcon build.
6. Build the other packages:
   ```bash
   > colcon build --packages-select realsense2_camera_msgs realsense2_description realsense2_camera
   ```
   - You can add the `--event-handlers console_direct+` parameter to see more debug output from the colcon build.
7. Set up the environment with the newly installed packages (do this for every new terminal/cmd you open):
   ```bash
   > call install\setup.bat
   ```

# Usage

## Start the camera node

#### with ros2 run:
```
ros2 run realsense2_camera realsense2_camera_node
# or, with parameters - for example, with the temporal and spatial filters enabled:
ros2 run realsense2_camera realsense2_camera_node --ros-args -p enable_color:=false -p spatial_filter.enable:=true -p temporal_filter.enable:=true
```

#### with ros2 launch:
```
ros2 launch realsense2_camera rs_launch.py
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true
```
## Camera Name And Camera Namespace

You can set the camera name and camera namespace to distinguish between cameras and platforms, which helps identify the right nodes and topics to work with.

### Example
If you have multiple cameras (possibly of the same model) and multiple robots, you can choose to launch/run your nodes this way. For the first robot and first camera, run/launch with these parameters:
- camera_namespace: robot1
- camera_name: D455_1

- With ros2 launch (via the command line or by editing these two parameters in the launch file):
  ```
  ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1
  ```
- With ros2 run (using the remapping mechanism, [Reference](https://docs.ros.org/en/humble/How-To-Guides/Node-arguments.html)):
  ```
  ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1
  ```
- Result:
  ```
  > ros2 node list
  /robot1/D455_1
  > ros2 topic list
  /robot1/D455_1/color/camera_info
  /robot1/D455_1/color/image_raw
  /robot1/D455_1/color/metadata
  /robot1/D455_1/depth/camera_info
  /robot1/D455_1/depth/image_rect_raw
  /robot1/D455_1/depth/metadata
  /robot1/D455_1/extrinsics/depth_to_color
  /robot1/D455_1/imu
  > ros2 service list
  /robot1/D455_1/device_info
  ```

### Default behavior if none of these parameters are given:
- camera_namespace:=camera
- camera_name:=camera
```
> ros2 node list
/camera/camera
> ros2 topic list
/camera/camera/color/camera_info
/camera/camera/color/image_raw
/camera/camera/color/metadata
/camera/camera/depth/camera_info
/camera/camera/depth/image_rect_raw
/camera/camera/depth/metadata
/camera/camera/extrinsics/depth_to_color
/camera/camera/imu
> ros2 service list
/camera/camera/device_info
```
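The naming scheme above can be sketched as a small helper. This is an illustration only (the `topic_name` function is ours, not part of the wrapper); it simply mirrors how the namespace and name parameters prefix every topic:

```python
def topic_name(stream: str, camera_namespace: str = "camera",
               camera_name: str = "camera") -> str:
    """Build the fully qualified topic name the wrapper publishes for a
    stream, given the namespace/name parameters (defaults match the wrapper's)."""
    return f"/{camera_namespace}/{camera_name}/{stream}"

print(topic_name("color/image_raw"))                      # default parameters
print(topic_name("color/image_raw", "robot1", "D455_1"))  # first robot, first camera
```

This also makes it easy to generate subscription targets programmatically when iterating over several robots and cameras.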
## Parameters

### Available Parameters:
- For the entire list of parameters, type `ros2 param list`.
- For reading a parameter value, use `ros2 param get
## ROS2 (Robot) vs Optical (Camera) Coordinate Systems

- Point of view:
  - Imagine we are standing behind the camera, looking forward.
  - Always use this point of view when talking about coordinates, left vs right IRs, position of a sensor, etc.

![image](https://user-images.githubusercontent.com/99127997/230150735-bc31fedf-d715-4e35-b462-fe2c338832c3.png)

- ROS2 coordinate system: (X: Forward, Y: Left, Z: Up)
- Camera optical coordinate system: (X: Right, Y: Down, Z: Forward)
- References: [REP-0103](https://www.ros.org/reps/rep-0103.html#coordinate-frame-conventions), [REP-0105](https://www.ros.org/reps/rep-0105.html#coordinate-frames)
- All data published on our wrapper topics is optical data taken directly from our camera sensors.
- The static and dynamic TF topics publish both the optical CS and the ROS CS, giving the user the ability to move from one CS to the other.
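Given the two axis conventions above, a point in the optical frame maps to the ROS frame as (x_ros, y_ros, z_ros) = (z_opt, -x_opt, -y_opt). A minimal sketch of this axis permutation (the helper names are ours, not wrapper API):

```python
def optical_to_ros(p):
    """Convert a point from the camera optical frame (X right, Y down,
    Z forward) to the ROS body frame (X forward, Y left, Z up)."""
    x, y, z = p
    return (z, -x, -y)

def ros_to_optical(p):
    """Inverse mapping: ROS body frame back to the camera optical frame."""
    x, y, z = p
    return (-y, -z, x)

# A point 2 m in front of the camera and 0.1 m to its right:
print(optical_to_ros((0.1, 0.0, 2.0)))  # forward 2 m, 0.1 m to the right (negative Y)
```

Note this handles only the axis convention; the full transforms between frames (including sensor offsets) come from the TF tree described below.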
## TF from coordinate A to coordinate B

- A TF msg expresses a transform from coordinate frame "header.frame_id" (source) to coordinate frame "child_frame_id" (destination). [Reference](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Transform.html)
- In RealSense cameras, the origin point (0,0,0) is taken from the left IR (infra1) position and named the "camera_link" frame.
- The depth, left IR, and "camera_link" coordinate frames coincide.
- Our wrapper provides static TFs from each sensor coordinate frame to the camera base (camera_link).
- It also provides TFs from each sensor's ROS coordinate frame to its corresponding optical coordinate frame.
- Example of the static TFs of the RGB sensor and the Infra2 (right infra) sensor of a D435i module, as shown in rviz2:

![example](https://user-images.githubusercontent.com/99127997/230148106-0f79cbdb-c401-4d09-b386-a366af18e5f7.png)
## Extrinsics from sensor A to sensor B

- The extrinsics from sensor A to sensor B give the position and orientation of sensor A relative to sensor B.
- Imagine that B is the origin (0,0,0); then Extrinsics(A->B) describes where sensor A is relative to sensor B.
- For example, depth_to_color in a D435i:
  - If we look from behind the D435i, the extrinsics from depth to color describe where the depth sensor is relative to the color sensor.
  - If we look only at the X coordinate, in optical coordinates (again, from behind), and assume that the COLOR (RGB) sensor is at (0,0,0), then the DEPTH sensor is to the right of RGB by 0.0148m (1.48cm).

![d435i](https://user-images.githubusercontent.com/99127997/230220297-e392f0fc-63bf-4bab-8001-af1ddf0ed00e.png)

```
administrator@perclnx466 ~/ros2_humble $ ros2 topic echo /camera/camera/extrinsics/depth_to_color
rotation:
- 0.9999583959579468
- 0.008895332925021648
- -0.0020127370953559875
- -0.008895229548215866
- 0.9999604225158691
- 6.045500049367547e-05
- 0.0020131953060626984
- -4.254872692399658e-05
- 0.9999979734420776
translation:
- 0.01485931035131216
- 0.0010161789832636714
- 0.0005317096947692335
---
```

- The extrinsics msg is made up of two parts:
  - float64[9] rotation (column-major 3x3 rotation matrix)
  - float64[3] translation (three-element translation vector, in meters)
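Applying such a message can be sketched in plain Python (the numbers are the ones echoed above; the helper is ours, not wrapper API). Note the column-major layout when unflattening the rotation:

```python
def apply_extrinsics(rotation, translation, point):
    """Transform a 3D point with an extrinsics msg: p_out = R @ p + t.
    `rotation` is a flat COLUMN-MAJOR 3x3 matrix (float64[9]),
    `translation` is a 3-vector in meters."""
    # Column-major flattening: element (row i, col j) lives at index i + 3*j.
    R = [[rotation[i + 3 * j] for j in range(3)] for i in range(3)]
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Values from the depth_to_color echo above (D435i):
rotation = [0.9999583959579468, 0.008895332925021648, -0.0020127370953559875,
            -0.008895229548215866, 0.9999604225158691, 6.045500049367547e-05,
            0.0020131953060626984, -4.254872692399658e-05, 0.9999979734420776]
translation = [0.01485931035131216, 0.0010161789832636714, 0.0005317096947692335]

# The depth sensor's origin expressed in the color frame - just the translation:
print(apply_extrinsics(rotation, translation, (0.0, 0.0, 0.0)))
```

Transforming the origin returns the translation alone, matching the ~1.48 cm X offset discussed above.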
## Published Topics

The published topics differ according to the device and parameters. After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type `ros2 topic list`):

- /camera/camera/aligned_depth_to_color/camera_info
- /camera/camera/aligned_depth_to_color/image_raw
- /camera/camera/color/camera_info
- /camera/camera/color/image_raw
- /camera/camera/color/metadata
- /camera/camera/depth/camera_info
- /camera/camera/depth/color/points
- /camera/camera/depth/image_rect_raw
- /camera/camera/depth/metadata
- /camera/camera/extrinsics/depth_to_color
- /camera/camera/imu
- /diagnostics
- /parameter_events
- /rosout
- /tf_static

This will stream the relevant camera sensors and publish on the appropriate ROS topics.

Enabling accel and gyro is achieved either by adding the following parameters to the command line:<br/>
`ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true enable_gyro:=true enable_accel:=true`<br/>
or at runtime using the following commands:
```
ros2 param set /camera/camera enable_accel true
ros2 param set /camera/camera enable_gyro true
```

Enabling a stream adds its matching topics. For instance, enabling the gyro and accel streams adds the following topics:
- /camera/camera/accel/imu_info
- /camera/camera/accel/metadata
- /camera/camera/accel/sample
- /camera/camera/extrinsics/depth_to_accel
- /camera/camera/extrinsics/depth_to_gyro
- /camera/camera/gyro/imu_info
- /camera/camera/gyro/metadata
- /camera/camera/gyro/sample
## RGBD Topic

RGBD is a new topic, publishing [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as the color and depth images are synchronized by frame time tag.

These boolean parameters should be true to enable rgbd messages:
- `enable_rgbd`: new parameter, to enable/disable the rgbd topic, changeable during runtime
- `align_depth.enable`: align depth images to rgb images
- `enable_sync`: let librealsense sync between frames, and get the frameset with the color and depth images combined
- `enable_color` + `enable_depth`: enable both the color and depth sensors

The current QoS of the topic itself is the same as for the Depth and Color streams (SYSTEM_DEFAULT).

Example:
```
ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true align_depth.enable:=true enable_color:=true enable_depth:=true
```
## Metadata topic

The metadata messages store the camera's available metadata in *json* format. To learn more, a dedicated script for echoing a metadata topic at runtime is attached. For instance, use the following command to echo the camera/depth/metadata topic:
```
python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/camera/depth/metadata
```
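Since the metadata payload is plain JSON carried in the message, consuming it amounts to standard JSON parsing. A minimal sketch (the keys below are illustrative assumptions; the actual set depends on the camera model and stream):

```python
import json

# Hypothetical metadata payload as carried by a metadata message;
# real keys vary per device/stream (frame counters and timestamps are typical).
payload = '{"frame_number": 1318, "frame_timestamp": 169402.56}'

metadata = json.loads(payload)
print(metadata["frame_number"])     # -> 1318
print(sorted(metadata.keys()))      # inspect which fields this stream provides
```

Listing the keys of a live message is a quick way to discover what metadata your particular camera exposes.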
## Post-Processing Filters

The following post-processing filters are available:
- `align_depth`: if enabled, publishes the depth image aligned to the color image on the topic `/camera/camera/aligned_depth_to_color/image_raw`.
  - The pointcloud, if created, will be based on the aligned depth image.
- `colorizer`: colors the depth image. On the depth topic, an RGB image will be published instead of the 16-bit depth values.
- `pointcloud`: adds a pointcloud topic `/camera/camera/depth/color/points`.
  - The texture of the pointcloud can be modified using the `pointcloud.stream_filter` parameter.
  - The depth FOV and the texture FOV are not the same. By default, the pointcloud is limited to the section of depth containing the texture. You can get a full depth-to-pointcloud, coloring the regions beyond the texture with zeros, by setting `pointcloud.allow_no_texture_points` to true.
  - The pointcloud is in an unordered format by default. This can be changed by setting `pointcloud.ordered_pc` to true.
  - The QoS of the pointcloud topic is independent of the depth and color streams and can be controlled with the `pointcloud.pointcloud_qos` parameter.
    - The same set of QoS values is supported as for the other streams, refer to
## Available services

- `device_info`: retrieves information about the device (serial_number, firmware_version, etc.). Type `ros2 interface show realsense2_camera_msgs/srv/DeviceInfo` for the full list. Call example: `ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo`
## Efficient intra-process communication

Our ROS2 wrapper node supports zero-copy communication if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.

You will need to launch a component container and load our node as a component, together with the other component nodes. Further details on "Composing multiple nodes in a single process" can be found [here](https://docs.ros.org/en/rolling/Tutorials/Composition.html). Further details on efficient intra-process communication can be found [here](https://docs.ros.org/en/humble/Tutorials/Intra-Process-Communication.html#efficient-intra-process-communication).

### Example
#### Manually loading multiple components into the same process
- Start the component container:
  ```bash
  ros2 run rclcpp_components component_container
  ```
- Add the wrapper:
  ```bash
  ros2 component load /ComponentManager realsense2_camera realsense2_camera::RealSenseNodeFactory -e use_intra_process_comms:=true
  ```
  Load other component nodes (consumers of the wrapper topics) in the same way.

### Limitations
- Node components are currently not supported in RCLPY
- Compressed images using `image_transport` will be disabled, as this isn't supported with intra-process communication

### Latency test tool and launch file
To get a sense of the latency reduction, a frame latency reporter tool is available via a launch file. The launch file loads the wrapper and a frame latency reporter tool component into a single container (and therefore the same process). The tool prints out the frame latency (`now - frame.timestamp`) per frame.

The tool is not built unless asked for. Turn on `BUILD_TOOLS` during build to have it available:
```bash
colcon build --cmake-args '-DBUILD_TOOLS=ON'
```
The launch file accepts a parameter, `intra_process_comms`, controlling whether zero-copy is turned on or not.
Default is on:
```bash
ros2 launch realsense2_camera rs_intra_process_demo_launch.py intra_process_comms:=true
```

[rolling-badge]: https://img.shields.io/badge/-ROLLING-orange?style=flat-square&logo=ros
[rolling]: https://docs.ros.org/en/rolling/index.html
[foxy-badge]: https://img.shields.io/badge/-foxy-orange?style=flat-square&logo=ros
[foxy]: https://docs.ros.org/en/foxy/index.html
[humble-badge]: https://img.shields.io/badge/-HUMBLE-orange?style=flat-square&logo=ros
[humble]: https://docs.ros.org/en/humble/index.html
[iron-badge]: https://img.shields.io/badge/-IRON-orange?style=flat-square&logo=ros
[iron]: https://docs.ros.org/en/iron/index.html
[ubuntu22-badge]: https://img.shields.io/badge/-UBUNTU%2022%2E04-blue?style=flat-square&logo=ubuntu&logoColor=white
[ubuntu22]: https://releases.ubuntu.com/jammy/
[ubuntu20-badge]: https://img.shields.io/badge/-UBUNTU%2020%2E04-blue?style=flat-square&logo=ubuntu&logoColor=white
[ubuntu20]: https://releases.ubuntu.com/focal/
CONTRIBUTING
How to Contribute
This project welcomes third-party code via GitHub pull requests.
You are welcome to propose and discuss enhancements using project issues.
Branching Policy: The ros2-master branch is considered stable at all times. If you plan to propose a patch, please commit into the ros2-development branch, or its own feature branch.

In addition, please run pr_check.sh under the scripts directory. This script verifies compliance with the project's standards:
- Every example / source file must refer to LICENSE
- Every example / source file must include correct copyright notice
- For indentation we are using spaces and not tabs
- Line-endings must be Unix and not DOS style
Most common issues can be automatically resolved by running `./pr_check.sh --fix`
Please familiarize yourself with the Apache License 2.0 before contributing.
Step-by-Step
- Make sure you have `git` and `cmake` installed on your system. On Windows we recommend using Git Extensions for git bash.
- Run `git clone https://github.com/IntelRealSense/realsense-ros.git` and `cd realsense-ros`
- To align with the latest status of the ros2-development branch, run:
  ```
  git fetch origin
  git checkout ros2-development
  git reset --hard origin/ros2-development
  ```
- `git checkout -b name_of_your_contribution` to create a dedicated branch
- Make your changes to the local repository
- Make sure your local git user is updated, or run `git config --global user.email "email@example.com"` and `git config --global user.name "user"` to set it up. These are the user & email that will appear in the GitHub history.
- `git add -p` to select the changes you wish to add
- `git commit -m "Description of the change"`
- Make sure you have a GitHub user and fork realsense-ros
- `git remote add fork https://github.com/username/realsense-ros.git` with your GitHub `username`
- `git fetch fork`
- `git push fork` to push the `name_of_your_contribution` branch to your fork
- Go to your fork on GitHub at `https://github.com/username/realsense-ros`
- Click the `New pull request` button
- For the `base` combo-box select `ros2-development`, since you want to submit a PR to that branch
- For the `compare` combo-box select `name_of_your_contribution` with your commit
- Review your changes and click `Create pull request`
- Wait for all automated checks to pass
- The PR will be approved / rejected after review from the team and the community
To continue to a new change, go to step 3. To return to your PR (in order to make more changes):
- `git stash`
- `git checkout name_of_your_contribution`
- Repeat items 5-8 from the previous list
- `git push fork` - the pull request will be automatically updated
Repository Summary
Checkout URI | https://github.com/IntelRealSense/realsense-ros.git |
VCS Type | git |
VCS Version | ros2-master |
Last Updated | 2024-09-02 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
realsense2_camera | 4.55.1 |
realsense2_camera_msgs | 4.55.1 |
realsense2_description | 4.55.1 |
README
ROS Wrapper for Intel(R) RealSense(TM) Cameras
Latest release notes
[![rolling][rolling-badge]][rolling] [![iron][iron-badge]][iron] [![humble][humble-badge]][humble] [![foxy][foxy-badge]][foxy] [![ubuntu22][ubuntu22-badge]][ubuntu22] [![ubuntu20][ubuntu20-badge]][ubuntu20]
Table of contents
ROS1 and ROS2 Legacy
Intel RealSense ROS1 Wrapper
Intel Realsense ROS1 Wrapper is not supported anymore, since our developers team are focusing on ROS2 distro.For ROS1 wrapper, go to ros1-legacy branch
Moving from ros2-legacy to ros2-master
* Changed Parameters: - **"stereo_module"**, **"l500_depth_sensor"** are replaced by **"depth_module"** - For video streams: **\Step 1: Install the ROS2 distribution
- #### Ubuntu 22.04: - [ROS2 Iron](https://docs.ros.org/en/iron/Installation/Ubuntu-Install-Debians.html) - [ROS2 Humble](https://docs.ros.org/en/humble/Installation/Ubuntu-Install-Debians.html) #### Ubuntu 20.04 - [ROS2 Foxy](https://docs.ros.org/en/foxy/Installation/Ubuntu-Install-Debians.html)Step 2: Install latest Intel® RealSense™ SDK 2.0
**Please choose only one option from the 3 options below (in order to prevent multiple versions installation and workspace conflicts)** - #### Option 1: Install librealsense2 debian package from Intel servers - Jetson users - use the [Jetson Installation Guide](https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md) - Otherwise, install from [Linux Debian Installation Guide](https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages) - In this case treat yourself as a developer: make sure to follow the instructions to also install librealsense2-dev and librealsense2-dkms packages - #### Option 2: Install librealsense2 (without graphical tools and examples) debian package from ROS servers (Foxy EOL distro is not supported by this option): - [Configure](http://wiki.ros.org/Installation/Ubuntu/Sources) your Ubuntu repositories - Install all realsense ROS packages by ```sudo apt install ros-Step 3: Install Intel® RealSense™ ROS2 wrapper
#### Option 1: Install debian package from ROS servers (Foxy EOL distro is not supported by this option): - [Configure](http://wiki.ros.org/Installation/Ubuntu/Sources) your Ubuntu repositories - Install all realsense ROS packages by ```sudo apt install ros-# Installation on Windows **PLEASE PAY ATTENTION: RealSense ROS2 Wrapper is not meant to be supported on Windows by our team, since ROS2 and its packages are still not fully supported over Windows. We added these installation steps below in order to try and make it easier for users who already started working with ROS2 on Windows and want to take advantage of the capabilities of our RealSense cameras**
Step 1: Install the ROS2 distribution
- #### Windows 10/11 **Please choose only one option from the two options below (in order to prevent multiple versions installation and workspace conflicts)** - Manual install from ROS2 formal documentation: - [ROS2 Iron](https://docs.ros.org/en/iron/Installation/Windows-Install-Binary.html) - [ROS2 Humble](https://docs.ros.org/en/humble/Installation/Windows-Install-Binary.html) - [ROS2 Foxy](https://docs.ros.org/en/foxy/Installation/Windows-Install-Binary.html) - Microsoft IOT binary installation: - https://ms-iot.github.io/ROSOnWindows/GettingStarted/SetupRos2.html - Pay attention that the examples of install are for Foxy distro (which is not supported anymore by RealSense ROS2 Wrapper) - Please replace the word "Foxy" with Humble or Iron, depends on the chosen distro.Step 2: Download RealSense™ ROS2 Wrapper and RealSense™ SDK 2.0 source code from github:
- Download Intel® RealSense™ ROS2 Wrapper source code from [Intel® RealSense™ ROS2 Wrapper Releases](https://github.com/IntelRealSense/realsense-ros/releases) - Download the corrosponding supported Intel® RealSense™ SDK 2.0 source code from the **"Supported RealSense SDK" section** of the specific release you chose fronm the link above - Place the librealsense folder inside the realsense-ros folder, to make the librealsense package set beside realsense2_camera, realsense2_camera_msgs and realsense2_description packagesStep 3: Build
1. Before starting building of our packages, make sure you have OpenCV for Windows installed on your machine. If you choose the Microsoft IOT way to install it, it will be installed automatically. Later, when colcon build, you might need to expose this installation folder by setting CMAKE_PREFIX_PATH, PATH, or OpenCV_DIR environment variables 2. Run "x64 Native Tools Command Prompt for VS 2019" as administrator 3. Setup ROS2 Environment (Do this for every new terminal/cmd you open): - If you choose the Microsoft IOT Binary option for installation ``` > C:\opt\ros\humble\x64\setup.bat ``` - If you choose the ROS2 formal documentation: ``` > call C:\dev\ros2_iron\local_setup.bat ``` 4. Change directory to realsense-ros folder ```bash > cd C:\ros2_ws\realsense-ros ``` 5. Build librealsense2 package only ```bash > colcon build --packages-select librealsense2 --cmake-args -DBUILD_EXAMPLES=OFF -DBUILD_WITH_STATIC_CRT=OFF -DBUILD_GRAPHICAL_EXAMPLES=OFF ``` - User can add `--event-handlers console_direct+` parameter to see more debug outputs of the colcon build 6. Build the other packages ```bash > colcon build --packages-select realsense2_camera_msgs realsense2_description realsense2_camera ``` - User can add `--event-handlers console_direct+` parameter to see more debug outputs of the colcon build 7. Setup environment with new installed packages (Do this for every new terminal/cmd you open): ```bash > call install\setup.bat ```# Usage ## Start the camera node #### with ros2 run: ros2 run realsense2_camera realsense2_camera_node # or, with parameters, for example - temporal and spatial filters are enabled: ros2 run realsense2_camera realsense2_camera_node --ros-args -p enable_color:=false -p spatial_filter.enable:=true -p temporal_filter.enable:=true #### with ros2 launch: ros2 launch realsense2_camera rs_launch.py ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true
## Camera Name And Camera Namespace User can set the camera name and camera namespace, to distinguish between cameras and platforms, which helps identifying the right nodes and topics to work with. ### Example - If user have multiple cameras (might be of the same model) and multiple robots then user can choose to launch/run his nodes on this way. - For the first robot and first camera he will run/launch it with these parameters: - camera_namespace: - robot1 - camera_name - D455_1 - With ros2 launch (via command line or by editing these two parameters in the launch file): ```ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1 ``` - With ros2 run (using remapping mechanisim [Reference](https://docs.ros.org/en/humble/How-To-Guides/Node-arguments.html)): ```ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1 ``` - Result ``` > ros2 node list /robot1/D455_1 > ros2 topic list /robot1/D455_1/color/camera_info /robot1/D455_1/color/image_raw /robot1/D455_1/color/metadata /robot1/D455_1/depth/camera_info /robot1/D455_1/depth/image_rect_raw /robot1/D455_1/depth/metadata /robot1/D455_1/extrinsics/depth_to_color /robot1/D455_1/imu > ros2 service list /robot1/D455_1/device_info ``` ### Default behavior if non of these parameters are given: - camera_namespace:=camera - camera_name:=camera ``` > ros2 node list /camera/camera > ros2 topic list /camera/camera/color/camera_info /camera/camera/color/image_raw /camera/camera/color/metadata /camera/camera/depth/camera_info /camera/camera/depth/image_rect_raw /camera/camera/depth/metadata /camera/camera/extrinsics/depth_to_color /camera/camera/imu > ros2 service list /camera/camera/device_info ```
## Parameters ### Available Parameters: - For the entire list of parameters type `ros2 param list`. - For reading a parameter value use `ros2 param get
## ROS2(Robot) vs Optical(Camera) Coordination Systems: - Point Of View: - Imagine we are standing behind of the camera, and looking forward. - Always use this point of view when talking about coordinates, left vs right IRs, position of sensor, etc.. ![image](https://user-images.githubusercontent.com/99127997/230150735-bc31fedf-d715-4e35-b462-fe2c338832c3.png) - ROS2 Coordinate System: (X: Forward, Y:Left, Z: Up) - Camera Optical Coordinate System: (X: Right, Y: Down, Z: Forward) - References: [REP-0103](https://www.ros.org/reps/rep-0103.html#coordinate-frame-conventions), [REP-0105](https://www.ros.org/reps/rep-0105.html#coordinate-frames) - All data published in our wrapper topics is optical data taken directly from our camera sensors. - static and dynamic TF topics publish optical CS and ROS CS to give the user the ability to move from one CS to other CS.
## TF from coordinate A to coordinate B: - TF msg expresses a transform from coordinate frame "header.frame_id" (source) to the coordinate frame child_frame_id (destination) [Reference](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Transform.html) - In RealSense cameras, the origin point (0,0,0) is taken from the left IR (infra1) position and named as "camera_link" frame - Depth, left IR and "camera_link" coordinates converge together. - Our wrapper provide static TFs between each sensor coordinate to the camera base (camera_link) - Also, it provides TFs from each sensor ROS coordinates to its corrosponding optical coordinates. - Example of static TFs of RGB sensor and Infra2 (right infra) sensor of D435i module as it shown in rviz2: ![example](https://user-images.githubusercontent.com/99127997/230148106-0f79cbdb-c401-4d09-b386-a366af18e5f7.png)
## Extrinsics from sensor A to sensor B: - Extrinsic from sensor A to sensor B means the position and orientation of sensor A relative to sensor B. - Imagine that B is the origin (0,0,0), then the Extrensics(A->B) describes where is sensor A relative to sensor B. - For example, depth_to_color, in D435i: - If we look from behind of the D435i, extrinsic from depth to color, means, where is the depth in relative to the color. - If we just look at the X coordinates, in the optical coordiantes (again, from behind) and assume that COLOR(RGB) sensor is (0,0,0), we can say that DEPTH sensor is on the right of RGB by 0.0148m (1.48cm). ![d435i](https://user-images.githubusercontent.com/99127997/230220297-e392f0fc-63bf-4bab-8001-af1ddf0ed00e.png) ``` administrator@perclnx466 ~/ros2_humble $ ros2 topic echo /camera/camera/extrinsics/depth_to_color rotation: - 0.9999583959579468 - 0.008895332925021648 - -0.0020127370953559875 - -0.008895229548215866 - 0.9999604225158691 - 6.045500049367547e-05 - 0.0020131953060626984 - -4.254872692399658e-05 - 0.9999979734420776 translation: - 0.01485931035131216 - 0.0010161789832636714 - 0.0005317096947692335 --- ``` - Extrinsic msg is made up of two parts: - float64[9] rotation (Column - major 3x3 rotation matrix) - float64[3] translation (Three-element translation vector, in meters)
## Published Topics

The published topics differ according to the device and parameters. After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type `ros2 topic list`):

- /camera/camera/aligned_depth_to_color/camera_info
- /camera/camera/aligned_depth_to_color/image_raw
- /camera/camera/color/camera_info
- /camera/camera/color/image_raw
- /camera/camera/color/metadata
- /camera/camera/depth/camera_info
- /camera/camera/depth/color/points
- /camera/camera/depth/image_rect_raw
- /camera/camera/depth/metadata
- /camera/camera/extrinsics/depth_to_color
- /camera/camera/imu
- /diagnostics
- /parameter_events
- /rosout
- /tf_static

This will stream the relevant camera sensors and publish on the appropriate ROS topics. Enabling accel and gyro is achieved either by adding the following parameters to the command line:

```
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true enable_gyro:=true enable_accel:=true
```

or at runtime using the following commands:

```
ros2 param set /camera/camera enable_accel true
ros2 param set /camera/camera enable_gyro true
```

Enabling a stream adds its matching topics. For instance, enabling the gyro and accel streams adds the following topics:

- /camera/camera/accel/imu_info
- /camera/camera/accel/metadata
- /camera/camera/accel/sample
- /camera/camera/extrinsics/depth_to_accel
- /camera/camera/extrinsics/depth_to_gyro
- /camera/camera/gyro/imu_info
- /camera/camera/gyro/metadata
- /camera/camera/gyro/sample
## RGBD Topic

RGBD is a new topic, publishing [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as color and depth images are synchronized by their frame time tags.

These boolean parameters should be true to enable rgbd messages:
- `enable_rgbd`: new parameter to enable/disable the rgbd topic, changeable during runtime
- `align_depth.enable`: align depth images to rgb images
- `enable_sync`: let librealsense sync between frames, and get the frameset with color and depth images combined
- `enable_color` + `enable_depth`: enable both color and depth sensors

The current QoS of the topic itself is the same as the depth and color streams (SYSTEM_DEFAULT).

Example:
```
ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true align_depth.enable:=true enable_color:=true enable_depth:=true
```
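The role of `enable_sync` can be illustrated without ROS: it amounts to pairing color and depth frames whose time tags agree, and only such pairs can be combined into one RGBD message. A hypothetical sketch (frame payloads and timestamps invented):

```python
# Pair color and depth frames by timestamp, as a frameset-style sync would.
# Frames are (timestamp_sec, payload) tuples; all values here are hypothetical.

def pair_by_timestamp(color_frames, depth_frames, tol=0.005):
    """Return (color, depth) pairs whose time tags match within tol seconds."""
    pairs = []
    for ct, cdata in color_frames:
        # Find the depth frame closest in time to this color frame.
        dt, ddata = min(depth_frames, key=lambda f: abs(f[0] - ct))
        if abs(dt - ct) <= tol:
            pairs.append(((ct, cdata), (dt, ddata)))
    return pairs

color = [(0.000, "c0"), (0.033, "c1"), (0.066, "c2")]
depth = [(0.001, "d0"), (0.034, "d1"), (0.100, "d2")]
matched = pair_by_timestamp(color, depth)  # c2 has no depth partner within tol
```

In the wrapper, librealsense performs this matching internally; the sketch only shows why both streams must be enabled and synced for rgbd to publish.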
## Metadata Topic

The metadata messages store the camera's available metadata in *json* format. A dedicated script for echoing a metadata topic at runtime is attached. For instance, use the following command to echo the camera/depth/metadata topic:

```
python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/camera/depth/metadata
```
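Since each metadata message carries a JSON string, any subscriber can decode it with a standard JSON parser. A hedged sketch with an invented payload (actual field names and values vary by device, stream and firmware):

```python
import json

# Decode a metadata message payload. The JSON below is an invented example;
# real field names and values depend on the device, stream and firmware.
payload = '{"frame_number": 1234, "frame_timestamp": 1680000000.123, "gain_level": 16}'

metadata = json.loads(payload)
frame_number = metadata["frame_number"]
```

In a real subscriber, `payload` would be the `json_data` string field of the received metadata message.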
## Post-Processing Filters

The following post-processing filters are available:
- `align_depth`: if enabled, publishes the depth image aligned to the color image on the topic `/camera/camera/aligned_depth_to_color/image_raw`.
  - The pointcloud, if created, will be based on the aligned depth image.
- `colorizer`: colors the depth image. An RGB image will be published on the depth topic instead of the 16-bit depth values.
- `pointcloud`: adds a pointcloud topic `/camera/camera/depth/color/points`.
  - The texture of the pointcloud can be modified using the `pointcloud.stream_filter` parameter.
  - The depth FOV and the texture FOV are not identical. By default, the pointcloud is limited to the section of depth containing the texture. You can get a full depth-to-pointcloud, coloring the regions beyond the texture with zeros, by setting `pointcloud.allow_no_texture_points` to true.
  - The pointcloud is unordered by default. This can be changed by setting `pointcloud.ordered_pc` to true.
  - The QoS of the pointcloud topic is independent of the depth and color streams and can be controlled with the `pointcloud.pointcloud_qos` parameter. The same set of QoS values is supported as for the other streams.
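The geometric core of the pointcloud filter is pinhole deprojection: each depth pixel `(u, v)` with depth `z` becomes a 3D point using the stream intrinsics published on the `camera_info` topic. A ROS-free sketch with hypothetical intrinsics:

```python
# Deproject a depth pixel (u, v, depth) to a 3D point with a pinhole camera
# model - the per-pixel geometry behind the pointcloud filter.
# Intrinsics below are hypothetical; real values come from camera_info.

fx, fy = 600.0, 600.0   # focal lengths, in pixels
cx, cy = 320.0, 240.0   # principal point, in pixels

def deproject(u, v, depth_m):
    """Return (x, y, z) in meters, in the optical frame (X right, Y down, Z forward)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

point = deproject(920.0, 240.0, 1.0)  # pixel 600px right of center, 1 m deep
```

librealsense applies the same model (plus distortion handling) internally; this sketch ignores distortion for clarity.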
## Available Services

- `device_info`: retrieve information about the device (serial_number, firmware_version, etc.). Type `ros2 interface show realsense2_camera_msgs/srv/DeviceInfo` for the full list. Call example: `ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo`
## Efficient Intra-Process Communication

Our ROS2 wrapper node supports zero-copy communication if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.

You will need to launch a component container and load our node as a component together with other component nodes. Further details on "Composing multiple nodes in a single process" can be found [here](https://docs.ros.org/en/rolling/Tutorials/Composition.html), and further details on efficient intra-process communication can be found [here](https://docs.ros.org/en/humble/Tutorials/Intra-Process-Communication.html#efficient-intra-process-communication).

### Example

#### Manually loading multiple components into the same process

- Start the component container:
```bash
ros2 run rclcpp_components component_container
```
- Add the wrapper:
```bash
ros2 component load /ComponentManager realsense2_camera realsense2_camera::RealSenseNodeFactory -e use_intra_process_comms:=true
```

Load other component nodes (consumers of the wrapper topics) in the same way.

### Limitations

- Node components are currently not supported in RCLPY
- Compressed images using `image_transport` will be disabled, as this isn't supported with intra-process communication

### Latency test tool and launch file

To get a sense of the latency reduction, a frame latency reporter tool is available via a launch file. The launch file loads the wrapper and a frame latency reporter tool component into a single container (so the same process). The tool prints out the frame latency (`now - frame.timestamp`) per frame.

The tool is not built unless asked for. Turn on `BUILD_TOOLS` during build to have it available:
```bash
colcon build --cmake-args '-DBUILD_TOOLS=ON'
```

The launch file accepts a parameter, `intra_process_comms`, controlling whether zero-copy is turned on or not.
Default is on:
```bash
ros2 launch realsense2_camera rs_intra_process_demo_launch.py intra_process_comms:=true
```
</details>

[rolling-badge]: https://img.shields.io/badge/-ROLLING-orange?style=flat-square&logo=ros
[rolling]: https://docs.ros.org/en/rolling/index.html
[foxy-badge]: https://img.shields.io/badge/-foxy-orange?style=flat-square&logo=ros
[foxy]: https://docs.ros.org/en/foxy/index.html
[humble-badge]: https://img.shields.io/badge/-HUMBLE-orange?style=flat-square&logo=ros
[humble]: https://docs.ros.org/en/humble/index.html
[iron-badge]: https://img.shields.io/badge/-IRON-orange?style=flat-square&logo=ros
[iron]: https://docs.ros.org/en/iron/index.html
[ubuntu22-badge]: https://img.shields.io/badge/-UBUNTU%2022%2E04-blue?style=flat-square&logo=ubuntu&logoColor=white
[ubuntu22]: https://releases.ubuntu.com/jammy/
[ubuntu20-badge]: https://img.shields.io/badge/-UBUNTU%2020%2E04-blue?style=flat-square&logo=ubuntu&logoColor=white
[ubuntu20]: https://releases.ubuntu.com/focal/
# CONTRIBUTING

## How to Contribute

This project welcomes third-party code via GitHub pull requests.

You are welcome to propose and discuss enhancements using project issues.

Branching Policy: The `ros2-master` branch is considered stable at all times. If you plan to propose a patch, please commit into the `ros2-development` branch, or its own feature branch.

In addition, please run `pr_check.sh` under the `scripts` directory. This script verifies compliance with the project's standards:

- Every example / source file must refer to LICENSE
- Every example / source file must include the correct copyright notice
- For indentation we are using spaces and not tabs
- Line endings must be Unix style, not DOS style

Most common issues can be automatically resolved by running `./pr_check.sh --fix`.

Please familiarize yourself with the Apache License 2.0 before contributing.

## Step-by-Step

1. Make sure you have `git` and `cmake` installed on your system. On Windows we recommend using Git Extensions for git bash.
2. Run `git clone https://github.com/IntelRealSense/realsense-ros.git` and `cd realsense-ros`.
3. To align with the latest status of the ros2-development branch, run:
   ```
   git fetch origin
   git checkout ros2-development
   git reset --hard origin/ros2-development
   ```
4. `git checkout -b name_of_your_contribution` to create a dedicated branch.
5. Make your changes to the local repository.
6. Make sure your local git user is updated, or run `git config --global user.email "email@example.com"` and `git config --global user.name "user"` to set it up. This is the user and email that will appear in GitHub history.
7. `git add -p` to select the changes you wish to add.
8. `git commit -m "Description of the change"`.
9. Make sure you have a GitHub user and a fork of realsense-ros.
10. `git remote add fork https://github.com/username/realsense-ros.git` with your GitHub `username`.
11. `git fetch fork`.
12. `git push fork` to push the `name_of_your_contribution` branch to your fork.
13. Go to your fork on GitHub at `https://github.com/username/realsense-ros`.
14. Click the `New pull request` button.
15. For the `base` combo-box select `ros2-development`, since you want to submit a PR to that branch.
16. For the `compare` combo-box select `name_of_your_contribution` with your commit.
17. Review your changes and click `Create pull request`.
18. Wait for all automated checks to pass.
19. The PR will be approved / rejected after review from the team and the community.

To continue to a new change, go to step 3. To return to your PR (in order to make more changes):

1. `git stash`
2. `git checkout name_of_your_contribution`
3. Repeat items 5-8 from the previous list
4. `git push fork`. The pull request will be automatically updated.
### Step 3: Install Intel® RealSense™ ROS2 Wrapper

#### Option 1: Install debian package from ROS servers (Foxy EOL distro is not supported by this option):
- [Configure](http://wiki.ros.org/Installation/Ubuntu/Sources) your Ubuntu repositories
- Install all realsense ROS packages by `sudo apt install ros-<ROS_DISTRO>-realsense2-*`

# Installation on Windows

**PLEASE PAY ATTENTION: RealSense ROS2 Wrapper is not meant to be supported on Windows by our team, since ROS2 and its packages are still not fully supported on Windows. We added the installation steps below to make it easier for users who have already started working with ROS2 on Windows and want to take advantage of the capabilities of our RealSense cameras**
### Step 1: Install the ROS2 distribution
#### Windows 10/11

**Please choose only one option from the two options below (in order to prevent multiple version installations and workspace conflicts)**

- Manual install from the official ROS2 documentation:
  - [ROS2 Iron](https://docs.ros.org/en/iron/Installation/Windows-Install-Binary.html)
  - [ROS2 Humble](https://docs.ros.org/en/humble/Installation/Windows-Install-Binary.html)
  - [ROS2 Foxy](https://docs.ros.org/en/foxy/Installation/Windows-Install-Binary.html)
- Microsoft IoT binary installation:
  - https://ms-iot.github.io/ROSOnWindows/GettingStarted/SetupRos2.html
  - Note that the installation examples are for the Foxy distro (which is no longer supported by the RealSense ROS2 wrapper). Please replace the word "Foxy" with Humble or Iron, depending on the chosen distro.

### Step 2: Download RealSense™ ROS2 Wrapper and RealSense™ SDK 2.0 source code from github
- Download Intel® RealSense™ ROS2 Wrapper source code from [Intel® RealSense™ ROS2 Wrapper Releases](https://github.com/IntelRealSense/realsense-ros/releases)
- Download the corresponding supported Intel® RealSense™ SDK 2.0 source code from the **"Supported RealSense SDK" section** of the specific release you chose from the link above
- Place the librealsense folder inside the realsense-ros folder, so that the librealsense package sits beside the realsense2_camera, realsense2_camera_msgs and realsense2_description packages

### Step 3: Build
1. Before starting to build our packages, make sure you have OpenCV for Windows installed on your machine. If you chose the Microsoft IoT way to install ROS2, it will be installed automatically. Later, when running colcon build, you might need to expose this installation folder by setting the CMAKE_PREFIX_PATH, PATH, or OpenCV_DIR environment variables
2. Run "x64 Native Tools Command Prompt for VS 2019" as administrator
3. Set up the ROS2 environment (do this for every new terminal/cmd you open):
   - If you chose the Microsoft IoT binary option for installation:
     ```
     > C:\opt\ros\humble\x64\setup.bat
     ```
   - If you chose the official ROS2 documentation:
     ```
     > call C:\dev\ros2_iron\local_setup.bat
     ```
4. Change directory to the realsense-ros folder:
   ```bash
   > cd C:\ros2_ws\realsense-ros
   ```
5. Build the librealsense2 package only:
   ```bash
   > colcon build --packages-select librealsense2 --cmake-args -DBUILD_EXAMPLES=OFF -DBUILD_WITH_STATIC_CRT=OFF -DBUILD_GRAPHICAL_EXAMPLES=OFF
   ```
   - You can add the `--event-handlers console_direct+` parameter to see more debug output from the colcon build
6. Build the other packages:
   ```bash
   > colcon build --packages-select realsense2_camera_msgs realsense2_description realsense2_camera
   ```
   - You can add the `--event-handlers console_direct+` parameter to see more debug output from the colcon build
7. Set up the environment with the newly installed packages (do this for every new terminal/cmd you open):
   ```bash
   > call install\setup.bat
   ```

# Usage

## Start the camera node

#### with ros2 run:
```
ros2 run realsense2_camera realsense2_camera_node
# or, with parameters, for example - temporal and spatial filters are enabled:
ros2 run realsense2_camera realsense2_camera_node --ros-args -p enable_color:=false -p spatial_filter.enable:=true -p temporal_filter.enable:=true
```

#### with ros2 launch:
```
ros2 launch realsense2_camera rs_launch.py
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true
```
## Camera Name and Camera Namespace

Users can set the camera name and camera namespace to distinguish between cameras and platforms, which helps identify the right nodes and topics to work with.

### Example

If a user has multiple cameras (possibly of the same model) and multiple robots, the nodes can be launched/run this way. For the first robot and first camera, run/launch with these parameters:
- camera_namespace: robot1
- camera_name: D455_1

With ros2 launch (via the command line or by editing these two parameters in the launch file):
```
ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1
```

With ros2 run (using the remapping mechanism [Reference](https://docs.ros.org/en/humble/How-To-Guides/Node-arguments.html)):
```
ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1
```

Result:
```
> ros2 node list
/robot1/D455_1
> ros2 topic list
/robot1/D455_1/color/camera_info
/robot1/D455_1/color/image_raw
/robot1/D455_1/color/metadata
/robot1/D455_1/depth/camera_info
/robot1/D455_1/depth/image_rect_raw
/robot1/D455_1/depth/metadata
/robot1/D455_1/extrinsics/depth_to_color
/robot1/D455_1/imu
> ros2 service list
/robot1/D455_1/device_info
```

### Default behavior if none of these parameters are given:
- camera_namespace:=camera
- camera_name:=camera
```
> ros2 node list
/camera/camera
> ros2 topic list
/camera/camera/color/camera_info
/camera/camera/color/image_raw
/camera/camera/color/metadata
/camera/camera/depth/camera_info
/camera/camera/depth/image_rect_raw
/camera/camera/depth/metadata
/camera/camera/extrinsics/depth_to_color
/camera/camera/imu
> ros2 service list
/camera/camera/device_info
```
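The resulting names follow a simple `/<camera_namespace>/<camera_name>/<suffix>` pattern, so the expected topic names can be derived up front. A small sketch (topic suffixes taken from the lists above):

```python
# Build the topic names produced by a given camera_namespace / camera_name,
# following the /<namespace>/<name>/<suffix> pattern shown above.

def topic_names(camera_namespace, camera_name, suffixes):
    return [f"/{camera_namespace}/{camera_name}/{s}" for s in suffixes]

suffixes = ["color/image_raw", "depth/image_rect_raw", "imu"]
robot1 = topic_names("robot1", "D455_1", suffixes)   # custom namespace/name
default = topic_names("camera", "camera", suffixes)  # default behavior
```

This is handy when wiring subscribers for multi-camera, multi-robot setups without hard-coding each topic string.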
## Parameters

### Available Parameters:
- For the entire list of parameters, type `ros2 param list`.
- For reading a parameter value, use `ros2 param get <node_name> <parameter_name>`
## ROS2 (Robot) vs Optical (Camera) Coordinate Systems:

- Point of view:
  - Imagine we are standing behind the camera, looking forward.
  - Always use this point of view when talking about coordinates, left vs right IRs, sensor positions, etc.

![image](https://user-images.githubusercontent.com/99127997/230150735-bc31fedf-d715-4e35-b462-fe2c338832c3.png)

- ROS2 coordinate system: (X: Forward, Y: Left, Z: Up)
- Camera optical coordinate system: (X: Right, Y: Down, Z: Forward)
- References: [REP-0103](https://www.ros.org/reps/rep-0103.html#coordinate-frame-conventions), [REP-0105](https://www.ros.org/reps/rep-0105.html#coordinate-frames)
- All data published on our wrapper topics is optical data taken directly from the camera sensors.
- The static and dynamic TF topics publish both the optical CS and the ROS CS, giving users the ability to move from one CS to the other.
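Under these two conventions the axis mapping is fixed: `x_ros = z_opt`, `y_ros = -x_opt`, `z_ros = -y_opt`. A minimal sketch of converting points between the frames:

```python
# Convert a point between the camera optical frame (X right, Y down, Z forward)
# and the ROS body frame (X forward, Y left, Z up), per REP-0103.

def optical_to_ros(p):
    x, y, z = p
    return (z, -x, -y)   # forward stays forward, right -> -left, down -> -up

def ros_to_optical(p):
    x, y, z = p
    return (-y, -z, x)   # inverse of the mapping above

p_opt = (0.1, 0.2, 1.0)        # right, down, forward in the optical frame
p_ros = optical_to_ros(p_opt)  # same point in the ROS body frame
```

The TFs published by the wrapper encode exactly this fixed rotation between each sensor's ROS frame and its optical frame.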