ffmpeg_image_transport package from ffmpeg_image_transport repo

ffmpeg_image_transport

Package Summary

Tags No category tags.
Version 2.0.1
License Apache-2
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros-misc-utilities/ffmpeg_image_transport.git
VCS Type git
VCS Version release
Last Updated 2025-03-18
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

ffmpeg_image_transport provides a plugin to image_transport for transparently sending an image stream encoded with ffmpeg.

Additional Links

Maintainers

  • Bernd Pfrommer

Authors

  • Bernd Pfrommer

ROS2 image transport for ffmpeg/libav

This ROS2 image transport plugin supports encoding/decoding with the FFmpeg library, for example encoding H.264 and H.265 (HEVC), using NVIDIA or other hardware acceleration where available.

The publisher plugin of the transport produces ffmpeg image transport messages. These are raw, encoded packets that are then transmitted and decoded by the subscriber plugin of the transport. The transport library contains both the publisher (encoder) and subscriber (decoder) plugins and therefore must be installed on both sides to be useful.

To extract e.g. frames or an mp4 file from a recorded bag, have a look at the ffmpeg_image_transport_tools repository.

Supported systems

Continuous integration is tested under Ubuntu with the supported ROS2 distros (see the build-status badges in the repository).

Installation

From packages

sudo apt-get install ros-${ROS_DISTRO}-ffmpeg-image-transport

From source

Set the following shell variables:

repo=ffmpeg_image_transport
url=https://github.com/ros-misc-utilities/${repo}.git

and follow the instructions here

Make sure to source your workspace’s install/setup.bash afterwards. If all goes well you should see the transport show up:

ros2 run image_transport list_transports

should give output (among other transport plugins):

"image_transport/ffmpeg"
 - Provided by package: ffmpeg_image_transport
 - Publisher: 
      This plugin encodes frames into ffmpeg compressed packets
    
 - Subscriber: 
      This plugin decodes frames from ffmpeg compressed packets

Remember to install the plugin on both hosts, the one that is encoding and the one that is decoding (viewing).

Parameters

Publisher (camera driver)

Here is a list of the available encoding parameters:

  • encoding: the libav (ffmpeg) encoder to use. The default is libx264, which is unaccelerated on-CPU encoding. Depending on your hardware, your options may include the hardware-accelerated h264_nvenc or h264_vaapi. You can list all available encoders with ffmpeg -codecs; in the h264 row, look for (encoders).
  • preset: default is empty (“”). Valid values are for instance slow, ll (low latency), etc. To find out which presets are available, run e.g. ffmpeg -hide_banner -f lavfi -i nullsrc -c:v libx264 -preset help -f mp4 - 2>&1
  • profile: for instance baseline or main. See the ffmpeg website.
  • tune: see the ffmpeg website. The default is empty (“”).
  • gop_size: the number of frames between keyframes. Default: 10. The larger this number, the more latency you incur, but the more efficient the compression becomes.
  • bit_rate: the maximum bit rate [in bits/s] that the encoder will target. Default is 8242880.
  • crf: Constant Rate Factor; affects the image quality. Value range is [0, 51]: 0 is lossless, 23 is the default, 51 is worst quality.
  • delay: purpose unclear (it does not help with delay). Default is empty (“”).
  • pixel_format: forces a different pixel format for internal conversions. Experimental, don’t use.
  • qmax: maximum quantizer value. Defaults to 10. See the ffmpeg documentation. The larger this number, the worse the image looks, and the more efficient the encoding.
  • measure_performance: for performance debugging (developers only). Defaults to false.
  • performance_interval: number of frames between logging of performance data.

The parameters are grouped under the ffmpeg_image_transport prefix. If you launch your publisher node (camera driver), you can pass it a parameter list like so:

        parameters=[{'ffmpeg_image_transport.encoding': 'hevc_nvenc',
                     'ffmpeg_image_transport.profile': 'main',
                     'ffmpeg_image_transport.preset': 'll',
                     'ffmpeg_image_transport.gop_size': 15}]

See the example launch file for a V4L USB camera.
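For orientation, a minimal launch-file sketch along these lines might look as follows. Note this is a hypothetical example: the usb_cam package and usb_cam_node_exe executable names are assumptions, so substitute your own camera driver.

```python
# Hypothetical launch file -- 'usb_cam' / 'usb_cam_node_exe' are assumed
# names; replace them with your actual camera driver package/executable.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='usb_cam',
            executable='usb_cam_node_exe',
            name='camera',
            parameters=[{
                # ffmpeg_image_transport encoding parameters
                'ffmpeg_image_transport.encoding': 'hevc_nvenc',
                'ffmpeg_image_transport.profile': 'main',
                'ffmpeg_image_transport.preset': 'll',
                'ffmpeg_image_transport.gop_size': 15,
            }],
        ),
    ])
```

The transport picks up these parameters from the publishing node, so no extra wiring is needed beyond declaring them on the driver node.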

Subscriber (viewer)

The subscriber has only one parameter (map): the mapping between the encoding that was used to encode the frames and the libav decoder to be used for decoding. The mapping is specified by creating entries under the ffmpeg.map parameter, which is prefixed by the image base name, e.g. camera.

For example, to tell the subscriber to use the hevc decoder instead of the default hevc_cuvid decoder for decoding incoming hevc_nvenc packets, set a parameter like so after you have started the viewer:

ros2 param set <name_of_your_viewer_node> camera.image_raw.ffmpeg.map.hevc_nvenc hevc

This assumes that your viewer node is subscribing to the topic /camera/image_raw/ffmpeg.

You also need to refresh the subscription (drop-down menu in the viewer) for the parameter to take effect. If anyone ever figures out how to set the parameters when starting the viewer (rather than afterwards!), please update this document.

Republishing

image_transport allows you to republish the decoded image locally; see for instance here. In this case the ROS parameters work as expected for modifying the mapping between encoding and decoder.

The following lines show how to specify the decoder when republishing, for example decoding incoming hevc_nvenc packets with the hevc decoder:

  • ROS 2 Humble:
  ros2 run image_transport republish ffmpeg in/ffmpeg:=image_raw/ffmpeg raw out:=image_raw/uncompressed --ros-args -p "ffmpeg_image_transport.map.hevc_nvenc:=hevc"
  
  • ROS 2 Jazzy:
  ros2 run image_transport republish --ros-args -p in_transport:=ffmpeg -p out_transport:=raw --remap in/ffmpeg:=image_raw/ffmpeg --remap out:=image_raw/uncompressed -p "ffmpeg_image_transport.map.hevc_nvenc:=hevc"
  

Note: The commands below use the Humble syntax and need to be changed as shown here for Jazzy.

Republishing is generally not necessary as long as publisher and subscriber both properly use an image transport. Some nodes, however, notably the rosbag player, do not support a proper transport, making republishing necessary.

Republishing raw images from rosbags in ffmpeg format

Suppose you have raw images in a rosbag but want to play them across a network using the ffmpeg_image_transport. In this case run a republish node like this (assuming your rosbag topic is /camera/image_raw):

ros2 run image_transport republish raw in:=/camera/image_raw

The republished topic will be available under a full transport, meaning you can now view the images with e.g. rqt_image_view under the topic /out/ffmpeg.

You can record them in ffmpeg format with e.g. ros2 bag record /out/ffmpeg.

Republishing compressed images from rosbags

Let’s say you have stored images as ffmpeg packets in a rosbag under the topic /camera/image_raw/ffmpeg. To view them use this line:

ros2 run image_transport republish ffmpeg --ros-args -r in/ffmpeg:=/camera/image_raw/ffmpeg


This will republish the topic with full image transport support.

Setting encoding parameters when launching camera driver

The launch directory contains an example launch file, cam.launch.py, that demonstrates how to set the encoding profile and preset for e.g. a USB camera.

How to use a custom version of libav (aka ffmpeg)

See the ffmpeg_encoder_decoder repository. There you will also find instructions for hardware-accelerated streaming on the NVIDIA Jetson.

License

This software is issued under the Apache License Version 2.0.

CHANGELOG

Changelog for package ffmpeg_image_transport

2.0.1 (2025-03-17)

  • fix bug that prevents compilation in humble
  • Contributors: Bernd Pfrommer

2.0.0 (2025-03-15)

  • use ffmpeg_encoder_decoder
  • align parameter handling with compressed image transport
  • point to new instructions
  • Contributors: Bernd Pfrommer

1.0.2 (2025-03-07)

  • updated badges and fixed deprecation warnings (#36)
    • updated badges and fixed deprecation warnings
    • only use AV_FRAME_FLAG_KEY when available
  • Configurable CRF (#34)
    • Added CRF support
    • fixed gop parameter in the README examples (Co-authored-by: Alexey Shtern <alexey.shtern@xtend.me>)

  • README: add usage instructions for Jazzy (the syntax mentioned in the README no longer works there)
  • fix typo in link
  • added documentation for enabling NVMPI on the jetson
  • Contributors: Alexey Shtern, Bernd Pfrommer, Danil Tolkachev, Michal Sojka

1.0.1 (2024-04-17)

  • use appropriate header for cv_bridge
  • added documentation and improved cmake exporting
  • move encoder->decoder map to decoder for public use
  • added frame delay control
  • changed install directory so other pkgs can ament_target_depend on this library
  • Contributors: Bernd Pfrommer, Toby Buckley

1.0.0 (2024-01-11)

  • initial release of ROS2 package
  • Contributors: Akshay Srinivasan, Bernd Pfrommer, Francesco Ferroni, Michal Sojka, akssri

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

