Repository Summary
Checkout URI | https://gitlab.com/boldhearts/ros2_v4l2_camera.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2022-09-05 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
v4l2_camera | 0.6.0 |
README
v4l2_camera
A ROS 2 camera driver using Video4Linux2 (V4L2).
Features
- Lists and exposes all user-settable controls of your camera as ROS 2 parameters.
- Uses `cv_bridge` to convert raw frames to ROS 2 messages, and so supports a wide range of encoding conversions.
- Supports `image_transport` to enable compression.
- Supports composing the camera node and using ROS 2 intra-process communication with zero-copy messaging.
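The camera can also be run as a composable node. A minimal sketch, assuming the component is registered under the plugin name `v4l2_camera::V4L2Camera`:
# Run the camera as a standalone composable node
ros2 component standalone v4l2_camera v4l2_camera::V4L2Camera
# Or load it into a running container, enabling intra-process communication
ros2 run rclcpp_components component_container
ros2 component load /ComponentManager v4l2_camera v4l2_camera::V4L2Camera -e use_intra_process_comms:=true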
Installation
This article details how to build and run this package. It focuses on Raspberry Pi OS with the Raspberry Pi Camera Module V2 but should generalise for most systems.
ROS package install
This package is available from the ROS package repositories and can therefore be installed with the following command and your ROS version name:
sudo apt-get install ros-${ROS_DISTRO}-v4l2-camera
Building from source
If you need to modify the code or ensure that you have the latest updates, you will need to clone this repository and then build the package.
git clone --branch ${ROS_DISTRO} https://gitlab.com/boldhearts/ros2_v4l2_camera.git src/v4l2_camera
rosdep install --from-paths src/v4l2_camera --ignore-src -r -y
colcon build
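After building, source the workspace overlay so the newly built package can be found by ros2 (a typical sketch, run from the workspace root):
# Make the freshly built package visible to this shell
source install/setup.bash
# Optional check: should now point into this workspace's install directory
ros2 pkg prefix v4l2_camera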
Most users will also want to set up compressed transport using the dependencies below.
Basic Usage
Run the camera node to publish camera images, using the default parameters:
ros2 run v4l2_camera v4l2_camera_node
You can use `rqt_image_view` to preview the images (open another terminal):
sudo apt-get install ros-${ROS_DISTRO}-rqt-image-view
ros2 run rqt_image_view rqt_image_view
See further below for information about enabling compression.
Nodes
v4l2_camera_node
The `v4l2_camera_node` interfaces with standard V4L2 devices and publishes images as `sensor_msgs/Image` messages.
Published Topics
- `/image_raw` - `sensor_msgs/Image`
  The image.
Parameters
- `video_device` - `string`, default: `"/dev/video0"`
  The device the camera is on.
- `pixel_format` - `string`, default: `"YUYV"`
  The pixel format to request from the camera. Must be a valid four-character 'FOURCC' code supported by V4L2 and by your camera. The node outputs the available formats supported by your camera when started. Currently supported: `"YUYV"` or `"GREY"`.
- `output_encoding` - `string`, default: `"rgb8"`
  The encoding to use for the output image. Can be any encoding supported by `cv_bridge` given the input pixel format. Currently these are, for `"YUYV"`: `"yuv422_yuy2"` (no conversion), or `"mono8"`, `"rgb8"`, `"bgr8"`, `"rgba8"` and `"bgra8"`, plus their 16-bit variants; and for `"GREY"`: `"mono8"` (no conversion), `"rgb8"`, `"bgr8"`, `"rgba8"` and `"bgra8"`, plus their 16-bit variants.
- `image_size` - `integer_array`, default: `[640, 480]`
  Width and height of the image.
Camera Control Parameters
Camera controls, such as brightness, contrast, white balance, etc., are automatically made available as parameters. The driver node enumerates all controls and creates a parameter for each, with the corresponding value type. The parameter name is derived from the control name reported by the camera driver: made lower case, commas removed, and spaces replaced by underscores. So `Brightness` becomes `brightness`, and `White Balance, Automatic` becomes `white_balance_automatic`.
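As a sketch, parameters can be set when starting the node and adjusted at runtime with `ros2 param` (assuming the default node name `v4l2_camera`; the device path and control name are examples and depend on your system and camera):
# Start the node with a different device and image size
ros2 run v4l2_camera v4l2_camera_node --ros-args -p video_device:="/dev/video1" -p image_size:="[1280,720]"
# In another terminal: list the enumerated control parameters and adjust one
ros2 param list /v4l2_camera
ros2 param set /v4l2_camera brightness 128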
Compressed Transport
This package uses `image_transport` to publish images and make compression possible. However, by default it only supports raw transfer; plugins are required to enable compression. These need to be installed separately:
sudo apt-get install ros-${ROS_DISTRO}-image-transport-plugins
Once installed, they will be automatically used by the driver and additional topics will be available, including `/image_raw/compressed`.
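To check that the plugins are picked up, list the topics while the camera node is running; the exact set depends on which plugins are installed:
ros2 topic list
# Typically includes, among others:
#   /image_raw
#   /image_raw/compressed
#   /image_raw/theora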
CONTRIBUTING
Repository Summary
Checkout URI | https://gitlab.com/boldhearts/ros2_v4l2_camera.git |
VCS Type | git |
VCS Version | foxy |
Last Updated | 2022-08-06 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
v4l2_camera | 0.5.0 |
README
v4l2_camera
A ROS 2 camera driver using Video4Linux2 (V4L2).
Installation
This article details how to build and run this package. It focuses on Raspberry Pi OS with the Raspberry Pi Camera Module V2 but should generalise for most systems.
ROS package install
This package is available from the ROS package repositories and can therefore be installed with the following command and your ROS version name:
apt-get install ros-<ros_version>-v4l2-camera
Building from source
If you need to modify the code or ensure that you have the latest updates, you will need to clone this repository and then build the package.
$ git clone --branch foxy https://gitlab.com/boldhearts/ros2_v4l2_camera.git src/v4l2_camera
$ colcon build
Most users will also want to set up compressed transport using the dependencies below.
Usage
Publish camera images, using the default parameters:
ros2 run v4l2_camera v4l2_camera_node
Preview the image (open another terminal):
ros2 run rqt_image_view rqt_image_view
Dependencies
- `image_transport` - makes it possible to set up compressed transport of the images, as described below. The ROS 2 port of `image_transport` in the `image_common` repository is needed inside of your workspace:
  git clone --branch ros2 https://github.com/ros-perception/image_common.git src/image_common
Note that `image_transport` only supports raw transport by default and needs additional plugins to actually provide compression; see below for how to do this.
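Putting the source-build steps together, a typical workspace setup might look like this (the workspace path is illustrative):
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws
git clone --branch foxy https://gitlab.com/boldhearts/ros2_v4l2_camera.git src/v4l2_camera
git clone --branch ros2 https://github.com/ros-perception/image_common.git src/image_common
colcon build
source install/setup.bash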
Nodes
v4l2_camera_node
The `v4l2_camera_node` interfaces with standard V4L2 devices and publishes images as `sensor_msgs/Image` messages.
Published Topics
- `/image_raw` - `sensor_msgs/Image`
  The image.
Parameters
- `video_device` - `string`, default: `"/dev/video0"`
  The device the camera is on.
- `pixel_format` - `string`, default: `"YUYV"`
  The pixel format to request from the camera. Must be a valid four-character 'FOURCC' code supported by V4L2 and by your camera. The node outputs the available formats supported by your camera when started. Currently supported: `"YUYV"` or `"GREY"`.
- `output_encoding` - `string`, default: `"rgb8"`
  The encoding to use for the output image. Currently supported: `"rgb8"`, `"yuv422"` or `"mono8"`.
- `image_size` - `integer_array`, default: `[640, 480]`
  Width and height of the image.
- `time_per_frame` - `integer_array`, default: current device setting
  The time between two successive frames, expressed as a ratio of two integers. For instance, a value of `[1, 30]` sets a period of 1/30 s, and thus a frame rate of 30 Hz. If the provided period is not supported, the driver may choose another period close to it; in that case the parameter change is reported to have failed. See the example after this section.
Camera Control Parameters
Camera controls, such as brightness, contrast, white balance, etc., are automatically made available as parameters. The driver node enumerates all controls and creates a parameter for each, with the corresponding value type. The parameter name is derived from the control name reported by the camera driver: made lower case, commas removed, and spaces replaced by underscores. So `Brightness` becomes `brightness`, and `White Balance, Automatic` becomes `white_balance_automatic`.
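A sketch of setting the frame rate and other parameters from the command line (the requested rate must be one your camera supports):
# Request 30 frames per second and a smaller image
ros2 run v4l2_camera v4l2_camera_node --ros-args -p time_per_frame:="[1,30]" -p image_size:="[320,240]"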
Compressed Transport
By default `image_transport` only supports raw transfer; plugins are required to enable compression. Standard ones are available in the `image_transport_plugins` repository, which depends on the OpenCV facilities provided by the `vision_opencv` repository. You can clone both into your workspace:
cd path/to/workspace
git clone https://github.com/ros-perception/vision_opencv.git --branch ros2 src/vision_opencv
git clone https://github.com/ros-perception/image_transport_plugins.git --branch ros2 src/image_transport_plugins
Building: Ubuntu
The following packages are required to be able to build the plugins:
sudo apt install libtheora-dev libogg-dev libboost-python-dev
Building: Arch
To get the plugins compiled on Arch Linux, a few special steps are needed:
- Arch provides OpenCV 4.x, but OpenCV 3.x is required
- Arch provides VTK 8.2, but VTK 8.1 is required
- `boost-python` is used, which needs to be linked to the Python libs explicitly:
  colcon build --symlink-install --packages-select cv_bridge --cmake-args "-DCMAKE_CXX_STANDARD_LIBRARIES=-lpython3.7m"
Usage
If the compression plugins are compiled and installed in the current workspace, they will be automatically used by the driver and an additional `/image_raw/compressed` topic will be available.
Neither Rviz2 nor `showimage` uses `image_transport` (yet). Therefore, to be able to view the compressed topic, it needs to be republished uncompressed. `image_transport` comes with the `republish` node to do this:
ros2 run image_transport republish compressed in/compressed:=image_raw/compressed raw out:=image_raw/uncompressed
The arguments mean:
- `compressed` - the transport to use for input, in this case 'compressed'. Alternative: `raw`, to republish the raw `/image_raw` topic.
- `in/compressed:=image_raw/compressed` - by default, `republish` uses the topics `in` and `out`, or for example `in/compressed` if the input transport is 'compressed'. This is a ROS remapping rule to map those names to the actual topics to use.
- `raw` - the transport to use for output. If omitted, all available transports are provided.
- `out:=image_raw/uncompressed` - remapping of the output topic.
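Once the republisher is running, the uncompressed copy can be viewed like any raw image topic, for example with `rqt_image_view` (either by selecting the topic in its drop-down or by passing it on the command line):
ros2 run rqt_image_view rqt_image_view /image_raw/uncompressed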
CONTRIBUTING
Repository Summary
Checkout URI | https://gitlab.com/boldhearts/ros2_v4l2_camera.git |
VCS Type | git |
VCS Version | rolling |
Last Updated | 2023-02-10 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
v4l2_camera | 0.6.1 |
README
v4l2_camera
A ROS 2 camera driver using Video4Linux2 (V4L2).
Features
- Lists and exposes all user-settable controls of your camera as ROS 2 parameters.
- Uses `cv_bridge` to convert raw frames to ROS 2 messages, and so supports a wide range of encoding conversions.
- Supports `image_transport` to enable compression.
- Supports composing the camera node and using ROS 2 intra-process communication with zero-copy messaging.
Installation
This article details how to build and run this package. It focuses on Raspberry Pi OS with the Raspberry Pi Camera Module V2 but should generalise for most systems.
ROS package install
This package is available from the ROS package repositories and can therefore be installed with the following command and your ROS version name:
sudo apt-get install ros-${ROS_DISTRO}-v4l2-camera
Building from source
If you need to modify the code or ensure that you have the latest updates, you will need to clone this repository and then build the package.
git clone --branch ${ROS_DISTRO} https://gitlab.com/boldhearts/ros2_v4l2_camera.git src/v4l2_camera
rosdep install --from-paths src/v4l2_camera --ignore-src -r -y
colcon build
Most users will also want to set up compressed transport using the dependencies below.
Basic Usage
Run the camera node to publish camera images, using the default parameters:
ros2 run v4l2_camera v4l2_camera_node
You can use `rqt_image_view` to preview the images (open another terminal):
sudo apt-get install ros-${ROS_DISTRO}-rqt-image-view
ros2 run rqt_image_view rqt_image_view
See further below for information about enabling compression.
Nodes
v4l2_camera_node
The `v4l2_camera_node` interfaces with standard V4L2 devices and publishes images as `sensor_msgs/Image` messages.
Published Topics
- `/image_raw` - `sensor_msgs/Image`
  The image.
Parameters
- `video_device` - `string`, default: `"/dev/video0"`
  The device the camera is on.
- `pixel_format` - `string`, default: `"YUYV"`
  The pixel format to request from the camera. Must be a valid four-character 'FOURCC' code supported by V4L2 and by your camera. The node outputs the available formats supported by your camera when started. Currently supported: `"YUYV"`, `"UYVY"`, or `"GREY"` (see the example after this section).
- `output_encoding` - `string`, default: `"rgb8"`
  The encoding to use for the output image. Can be any encoding supported by `cv_bridge` given the input pixel format. Currently these are:
  - `"YUYV"`: `"yuv422_yuy2"` (no conversion), or `"mono8"`, `"rgb8"`, `"bgr8"`, `"rgba8"` and `"bgra8"`, plus their 16-bit variants
  - `"UYVY"`: `"yuv422"` (no conversion), or `"mono8"`, `"rgb8"`, `"bgr8"`, `"rgba8"` and `"bgra8"`, plus their 16-bit variants
  - `"GREY"`: `"mono8"` (no conversion), `"rgb8"`, `"bgr8"`, `"rgba8"` and `"bgra8"`, plus their 16-bit variants
- `image_size` - `integer_array`, default: `[640, 480]`
  Width and height of the image.
Camera Control Parameters
Camera controls, such as brightness, contrast, white balance, etc., are automatically made available as parameters. The driver node enumerates all controls and creates a parameter for each, with the corresponding value type. The parameter name is derived from the control name reported by the camera driver: made lower case, commas removed, and spaces replaced by underscores. So `Brightness` becomes `brightness`, and `White Balance, Automatic` becomes `white_balance_automatic`.
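For instance, a camera that delivers UYVY frames can be used without any colour conversion by matching the pixel format and output encoding (a sketch; check the formats your camera reports at startup):
# Hypothetical UYVY camera: publish the frames unconverted
ros2 run v4l2_camera v4l2_camera_node --ros-args -p pixel_format:="UYVY" -p output_encoding:="yuv422"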
Compressed Transport
This package uses `image_transport` to publish images and make compression possible. However, by default it only supports raw transfer; plugins are required to enable compression. These need to be installed separately:
sudo apt-get install ros-${ROS_DISTRO}-image-transport-plugins
Once installed, they will be automatically used by the driver and additional topics will be available, including `/image_raw/compressed`.
CONTRIBUTING
Repository Summary
Checkout URI | https://gitlab.com/boldhearts/ros2_v4l2_camera.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2020-05-24 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
v4l2_camera | 0.1.1 |
README
v4l2_camera
A ROS 2 camera driver using Video4Linux2 (V4L2).
Custom Dependencies
The following dependencies need to be pulled in manually, because we need new features in them that have not been released yet:
- `common_interfaces` - USB cameras mostly use the YUY2 format, which historically hasn't been supported in ROS. Support has landed recently, but has not been released yet, so you need to clone master into your workspace:
  git clone https://github.com/ros2/common_interfaces.git src/common_interfaces
- `vision_opencv` - The OpenCV bridge is used to convert between image formats. Support for the YUY2 format has not landed yet (see PR ros-perception/vision_opencv#309). Until that has happened, clone the fork into your workspace:
  git clone --branch yuv422-yuy2 https://github.com/sgvandijk/vision_opencv.git src/vision_opencv
Nodes
v4l2_camera_node
The `v4l2_camera_node` interfaces with standard V4L2 devices and publishes images as `sensor_msgs/Image` messages.
Published Topics
- `/image_raw` - `sensor_msgs/Image`
  The image.
Parameters
- `video_device` - `string`, default: `"/dev/video0"`
  The device the camera is on.
- `pixel_format` - `string`, default: `"YUYV"`
  The pixel format to request from the camera. Must be a valid four-character 'FOURCC' code supported by V4L2 and by your camera. The node outputs the available formats supported by your camera when started. Currently only `"YUYV"` is supported, which results in image messages with the `YUV422_YUY2` encoding.
- `output_encoding` - `string`, default: `"rgb8"`
  The encoding to use for the output image. Any format supported by `cv_bridge::cvtColor` is allowed. If this matches the input format from the camera, no conversion, and thus no additional overhead, is required (see the example after this section).
- `image_size` - `integer_array`, default: `[640, 480]`
  Width and height of the image.
Camera Control Parameters
Camera controls, such as brightness, contrast, white balance, etc., are automatically made available as parameters. The driver node enumerates all controls and creates a parameter for each, with the corresponding value type. The parameter name is derived from the control name reported by the camera driver: made lower case, commas removed, and spaces replaced by underscores. So `Brightness` becomes `brightness`, and `White Balance, Automatic` becomes `white_balance_automatic`.
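A sketch of overriding the output encoding, for example to publish grayscale images and save bandwidth (whether a given conversion is available depends on your `cv_bridge` build):
# Hypothetical example: request mono8 output instead of the default rgb8
ros2 run v4l2_camera v4l2_camera_node --ros-args -p output_encoding:="mono8"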
Compressed Transport
Streaming raw images from a robot to your machine takes up a lot of bandwidth and is especially not feasible over WiFi. By using `image_transport` the images can be compressed to make streaming possible. However, by default `image_transport` only supports raw transfer, and additional plugins are required to enable compression.
Standard ones are available in the `image_transport_plugins` repository, which you can clone into your workspace:
git clone --branch ros2 https://github.com/ros-perception/image_transport_plugins.git src/image_transport_plugins
Building: Ubuntu
The following packages are required to be able to build the plugins:
sudo apt install libtheora-dev libogg-dev libboost-python-dev
Building: Arch
To get the plugins compiled on Arch Linux, a few special steps are needed:
- Arch provides OpenCV 4.x, but OpenCV 3.x is required
- Arch provides VTK 8.2, but VTK 8.1 is required
- `boost-python` is used, which needs to be linked to the Python libs explicitly:
  colcon build --symlink-install --packages-select cv_bridge --cmake-args "-DCMAKE_CXX_STANDARD_LIBRARIES=-lpython3.7m"
Usage
If the compression plugins are compiled and installed in the current workspace, they will be automatically used by the driver and additional `/image_raw/compressed` and `/image_raw/theora` topics will be available.
Neither Rviz2 nor `showimage` uses `image_transport` (yet), and so they can't use these topics. Therefore, to be able to view a compressed topic, it needs to be republished uncompressed. `image_transport` comes with the `republish` node to do this:
ros2 run image_transport republish compressed in/compressed:=image_raw/compressed raw out:=image_raw/uncompressed
To use the Theora transport instead (possibly more bandwidth efficient):
ros2 run image_transport republish theora in/theora:=image_raw/theora raw out:=image_raw/uncompressed
NB: the idea is to run this, and thus do the decompression, on your machine rather than on the robot; otherwise you'll be streaming raw data over the network after all.
The arguments mean:
- `compressed` - the transport to use for input, in this case 'compressed'. Alternative: `raw`, to republish the raw `/image_raw` topic.
- `in/compressed:=image_raw/compressed` - by default, `republish` uses the topics `in` and `out`, or for example `in/compressed` if the input transport is 'compressed'. This is a ROS remapping rule to map those names to the actual topics to use.
- `raw` - the transport to use for output. If omitted, all available transports are provided.
- `out:=image_raw/uncompressed` - remapping of the output topic.
CONTRIBUTING
Any contribution that you make to this repository will be under the Apache 2 License, as dictated by that license:
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
Repository Summary
Checkout URI | https://gitlab.com/boldhearts/ros2_v4l2_camera.git |
VCS Type | git |
VCS Version | galactic |
Last Updated | 2022-08-19 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
v4l2_camera | 0.5.0 |
README
v4l2_camera
A ROS 2 camera driver using Video4Linux2 (V4L2).
Installation
This article details how to build and run this package. It focuses on Raspberry Pi OS with the Raspberry Pi Camera Module V2 but should generalise for most systems.
ROS package install
This package is available from the ROS package repositories and can therefore be installed with the following command and your ROS version name:
apt-get install ros-<ros_version>-v4l2-camera
Building from source
If you need to modify the code or ensure that you have the latest updates, you will need to clone this repository and then build the package.
$ git clone --branch galactic https://gitlab.com/boldhearts/ros2_v4l2_camera.git src/v4l2_camera
$ colcon build
Most users will also want to set up compressed transport using the dependencies below.
Usage
Publish camera images, using the default parameters:
ros2 run v4l2_camera v4l2_camera_node
Preview the image (open another terminal):
ros2 run rqt_image_view rqt_image_view
Dependencies
- `image_transport` - makes it possible to set up compressed transport of the images, as described below. The ROS 2 port of `image_transport` in the `image_common` repository is needed inside of your workspace:
  git clone --branch ros2 https://github.com/ros-perception/image_common.git src/image_common
Note that `image_transport` only supports raw transport by default and needs additional plugins to actually provide compression; see below for how to do this.
Nodes
v4l2_camera_node
The `v4l2_camera_node` interfaces with standard V4L2 devices and publishes images as `sensor_msgs/Image` messages.
Published Topics
- `/image_raw` - `sensor_msgs/Image`
  The image.
Parameters
- `video_device` - `string`, default: `"/dev/video0"`
  The device the camera is on.
- `pixel_format` - `string`, default: `"YUYV"`
  The pixel format to request from the camera. Must be a valid four-character 'FOURCC' code supported by V4L2 and by your camera. The node outputs the available formats supported by your camera when started. Currently supported: `"YUYV"` or `"GREY"`.
- `output_encoding` - `string`, default: `"rgb8"`
  The encoding to use for the output image. Currently supported: `"rgb8"`, `"yuv422"` or `"mono8"`.
- `image_size` - `integer_array`, default: `[640, 480]`
  Width and height of the image.
- `time_per_frame` - `integer_array`, default: current device setting
  The time between two successive frames, expressed as a ratio of two integers. For instance, a value of `[1, 30]` sets a period of 1/30 s, and thus a frame rate of 30 Hz. If the provided period is not supported, the driver may choose another period close to it; in that case the parameter change is reported to have failed.
Camera Control Parameters
Camera controls, such as brightness, contrast, white balance, etc., are automatically made available as parameters. The driver node enumerates all controls and creates a parameter for each, with the corresponding value type. The parameter name is derived from the control name reported by the camera driver: made lower case, commas removed, and spaces replaced by underscores. So `Brightness` becomes `brightness`, and `White Balance, Automatic` becomes `white_balance_automatic`.
Compressed Transport
By default `image_transport` only supports raw transfer; plugins are required to enable compression. Standard ones are available in the `image_transport_plugins` repository, which depends on the OpenCV facilities provided by the `vision_opencv` repository. You can clone both into your workspace:
cd path/to/workspace
git clone https://github.com/ros-perception/vision_opencv.git --branch ros2 src/vision_opencv
git clone https://github.com/ros-perception/image_transport_plugins.git --branch ros2 src/image_transport_plugins
Building: Ubuntu
The following packages are required to be able to build the plugins:
sudo apt install libtheora-dev libogg-dev libboost-python-dev
Building: Arch
To get the plugins compiled on Arch Linux, a few special steps are needed:
- Arch provides OpenCV 4.x, but OpenCV 3.x is required
- Arch provides VTK 8.2, but VTK 8.1 is required
- `boost-python` is used, which needs to be linked to the Python libs explicitly:
  colcon build --symlink-install --packages-select cv_bridge --cmake-args "-DCMAKE_CXX_STANDARD_LIBRARIES=-lpython3.7m"
Usage
If the compression plugins are compiled and installed in the current workspace, they will be automatically used by the driver and an additional `/image_raw/compressed` topic will be available.
Neither Rviz2 nor `showimage` uses `image_transport` (yet). Therefore, to be able to view the compressed topic, it needs to be republished uncompressed. `image_transport` comes with the `republish` node to do this:
ros2 run image_transport republish compressed in/compressed:=image_raw/compressed raw out:=image_raw/uncompressed
The arguments mean:
- `compressed` - the transport to use for input, in this case 'compressed'. Alternative: `raw`, to republish the raw `/image_raw` topic.
- `in/compressed:=image_raw/compressed` - by default, `republish` uses the topics `in` and `out`, or for example `in/compressed` if the input transport is 'compressed'. This is a ROS remapping rule to map those names to the actual topics to use.
- `raw` - the transport to use for output. If omitted, all available transports are provided.
- `out:=image_raw/uncompressed` - remapping of the output topic.