The Kinesis Video Streams ROS package enables robots to stream video to the cloud for analytics, playback, and archival use. Out of the box, the nodes provided make it possible to encode and stream image data (e.g. video feeds and LIDAR scans) from a ROS "Image" topic to the cloud, enabling you to view the live video feed through the Kinesis Video Console, consume the stream via other applications, or perform intelligent analysis such as face detection and face recognition using Amazon Rekognition.
The node transmits standard sensor_msgs::msg::Image data from ROS topics to Kinesis Video Streams, optionally encoding the images as h264 video frames along the way (using the included h264_video_encoder), and can optionally fetch Amazon Rekognition results from corresponding Kinesis Data Streams and publish them to local ROS topics.
Note: h.264 hardware encoding is supported out of the box for OMX encoders and has been tested to work on the Raspberry Pi 3. In all other cases, software encoding is used, which is significantly more compute intensive and may affect overall system performance. If you wish to use a custom ffmpeg/libav encoder, you may provide the codec ROS parameter to the encoder node (the name provided must be discoverable by avcodec_find_encoder_by_name). Certain scenarios may require offline caching of video streams, which is not yet performed by this node.
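As a minimal sketch of selecting a custom encoder, assuming a ROS 2 parameter file passed to a node named h264_video_encoder (the node name and the codec value here are illustrative; the codec must be discoverable by avcodec_find_encoder_by_name in your ffmpeg/libav build):
h264_video_encoder:
  ros__parameters:
    codec: h264_omx  # example: OMX hardware encoder; any name known to avcodec_find_encoder_by_name works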
Amazon Kinesis Video Streams: Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video, and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.
Amazon Rekognition: The easy-to-use Rekognition API allows you to automatically identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Developers can quickly build a searchable content library to optimize media workflows, enrich recommendation engines by extracting text in images, or integrate secondary authentication into existing applications to enhance end-user security. With a wide variety of use cases, Amazon Rekognition enables you to easily add the benefits of computer vision to your business.
Keywords: ROS, ROS2, AWS, Kinesis Video Streams
The source code is released under Apache 2.0.
Author: AWS RoboMaker
Affiliation: Amazon Web Services (AWS)
RoboMaker cloud extensions rely on third-party software licensed under open-source licenses and are provided for demonstration purposes only. Incorporation or use of RoboMaker cloud extensions in connection with your production workloads or commercial product(s) or devices may affect your legal rights or obligations under the applicable open-source licenses. License information for this repository can be found here. AWS does not provide support for this cloud extension. You are solely responsible for how you configure, deploy, and maintain this cloud extension in your workloads or commercial product(s) or devices.
Supported ROS2 Distributions
You will need to create an AWS Account and configure the credentials to be able to communicate with AWS services. You may find AWS Configuration and Credential Files helpful.
The IAM user will need permissions for the following actions:
For Amazon Rekognition integration, the user will also need permissions for these actions:
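The exact action lists depend on your configuration. As an illustrative sketch only (the action names and the broad "Resource": "*" are assumptions to verify against the AWS documentation, and should be scoped down for production), an IAM policy could look like this, with the first statement covering basic streaming and the second covering consumption of Rekognition results from Kinesis Data Streams:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesisvideo:CreateStream",
        "kinesisvideo:DescribeStream",
        "kinesisvideo:GetDataEndpoint",
        "kinesisvideo:PutMedia"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:GetShardIterator",
        "kinesis:GetRecords"
      ],
      "Resource": "*"
    }
  ]
}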
Building from Source
To build from source you'll need to create a new workspace, clone and checkout the latest release branch of this repository, install all the dependencies, and compile. If you need the latest development features you can clone from the master branch instead of the latest release branch. While we guarantee the release branches are stable, master should be considered unstable due to ongoing development.
Create a ROS workspace and a source directory
mkdir -p ~/ros-workspace/src
Clone the package into the source directory
cd ~/ros-workspace/src
git clone https://github.com/aws-robotics/kinesisvideo-ros2.git -b release-latest
If this package has not been released yet, also fetch unreleased dependencies:
cd ~/ros-workspace/src/kinesisvideo-ros2
cp .rosinstall.master .rosinstall
rosws update
Install dependencies
cd ~/ros-workspace && sudo apt-get update
rosdep install --from-paths src --ignore-src -r -y
Build the packages
cd ~/ros-workspace && colcon build
Configure ROS library path
source ~/ros-workspace/install/local_setup.bash
Run the unit tests
colcon test --packages-select kinesis_video_streamer && colcon test-result --all
Run the node
- Configure the nodes (for more details, see the extended configuration section below).
- Set up your AWS credentials and make sure you have the required IAM permissions.
- Encoding: review the H264 Video Encoder sample configuration file and pay attention to subscription_topic (camera output - expects a sensor_msgs::msg::Image topic) and publication_topic.
- Streaming: review the Kinesis Video Streamer sample configuration file - make sure subscription_topic matches the encoder's publication_topic (see the sketch after this list).
- To use Amazon Rekognition for face detection and face recognition, follow the steps on the Rekognition guide (skip steps 8 & 9 as they are already performed by this node): https://docs.aws.amazon.com/rekognition/latest/dg/recognize-faces-in-a-video-stream.html
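As a minimal sketch of the topic wiring (the topic and stream names here are hypothetical; the sample configuration files remain the authoritative reference), the encoder's publication_topic must feed the streamer's subscription_topic:
h264_video_encoder:
  ros__parameters:
    subscription_topic: /camera/image_raw   # sensor_msgs::msg::Image from your camera node
    publication_topic: /video/encoded       # encoded KinesisVideoFrame output
kinesis_video_streamer:
  ros__parameters:
    kinesis_video:
      stream_count: 1
      stream0:
        subscription_topic: /video/encoded  # must match the encoder's publication_topic
        topic_type: 1                       # KinesisVideoFrame (h264)
        stream_name: my-robot-stream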
ros2 launch kinesis_video_streamer kinesis_video_streamer.launch.py
- Example: running on a Raspberry Pi
ros2 run raspicam2 raspicam2_node __params:=$(ros2 pkg prefix raspicam2)/share/raspicam2/cfg/params.yaml
ros2 launch h264_video_encoder h264_video_encoder_launch.py
ros2 launch kinesis_video_streamer kinesis_video_streamer.launch.py
- Log into your AWS Console to see the available Kinesis Video stream (or verify with the AWS CLI as shown below).
- For other platforms, replace step 1 with an equivalent command to launch your camera node. Reconfigure the topic names accordingly.
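To verify from the command line instead, assuming the AWS CLI is installed and configured with the same credentials, you can list your Kinesis Video streams:
aws kinesisvideo list-streams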
Configuration File and Parameters
Applies to the kinesis_video_streamer node. For configuring the encoder node, please see the README for the H264 Video Encoder node. An example configuration file called sample_config.yaml is provided. When parameters are absent from the ROS parameter server, default values are used. Since this node makes HTTP requests to AWS endpoints, valid AWS credentials must be provided (this can be done via the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY - see https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html).
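For example, credentials can be exported in the shell before launching the node (the values and region are placeholders to replace with your own):
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-west-2  # optional; the node's region parameter can also be used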
Node-wide configuration parameters
The parameters below apply to the node as a whole and are not specific to any one stream.
|aws_client_configuration.region||The AWS region to which the video should be streamed.||string|
|kinesis_video.stream_count||The number of streams you wish to load and transmit. Each stream should have its corresponding parameter set as described below.||int|
|kinesis_video.log4cplus_config||(optional) Config file path for the log4cplus logger, which is used by the Kinesis Video Producer SDK.||string|
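A minimal sketch of the node-wide section of a ROS 2 parameter file, mirroring the parameter names above (the layout is assumed; see sample_config.yaml for the authoritative structure):
kinesis_video_streamer:
  ros__parameters:
    aws_client_configuration:
      region: us-west-2        # example region
    kinesis_video:
      stream_count: 1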
Stream-specific configuration parameters
The parameters below should be provided per stream, with the parameter namespace being kinesis_video/stream<id> (e.g. kinesis_video/stream0/subscription_topic, where 0 <= id < stream_count).
|subscription_queue_size||(optional) The maximum number of messages to queue for the subscribed and published topics.||int|
|subscription_topic||Topic name to subscribe for the stream's input.||string|
|topic_type||Specifier for the transport protocol (message type) used. '1' for KinesisVideoFrame (supports h264 streaming), '2' for sensor_msgs::Image transport, '3' for KinesisVideoFrame with AWS Rekognition support.||int|
|stream_name||The name of the stream resource in AWS Kinesis Video Streams.||string|
|rekognition_data_stream||(optional - required if topic type == 3) The name of the Kinesis Data Stream from which AWS Rekognition analysis output should be read.||string|
|rekognition_topic_name||(optional - required if topic type == 3) The ROS topic to which the analysis results should be published.||string|
Additional stream-specific parameters such as frame_rate can be provided to further customize the stream definition structure. See Kinesis header stream definition for the remaining parameters and their default values.
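Putting it together, a hypothetical single-stream configuration with Rekognition enabled might look like this (the namespace layout, topic names, and values are assumptions for illustration; consult sample_config.yaml):
kinesis_video_streamer:
  ros__parameters:
    kinesis_video:
      stream_count: 1
      stream0:
        subscription_topic: /video/encoded
        topic_type: 3                        # KinesisVideoFrame with Rekognition support
        stream_name: my-robot-stream
        rekognition_data_stream: my-results-stream   # required when topic_type == 3
        rekognition_topic_name: /rekognition/results
        frame_rate: 30                       # optional stream-definition override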
Applies to the kinesis_video_streamer node; please see the following README for encoder-specific configuration.
- H264 Video Encoder node
The number of subscriptions is configurable and is determined by the kinesis_video/stream_count parameter. Each subscription is of the following form:
|Topic Name||Message Type||Description|
|Configurable||Configurable (kinesis_video_msgs/KinesisVideoFrame or sensor_msgs/Image)||The node will subscribe to a topic of a given name. The data is expected to be either images (such as from a camera node publishing Image messages), or video frames (such as from an encoder node publishing KinesisVideoFrame messages).|
Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.
Reporting Bugs/Feature Requests
We welcome you to use the GitHub issue tracker to report bugs or suggest features.
When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment
Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
- You are working against the latest source on the master branch.
- You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
- You open an issue to discuss any significant work - we would hate for your time to be wasted.
To send us a pull request, please:
- Fork the repository.
- Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
- Ensure local tests pass.
- Commit to your fork using clear commit messages.
- Send us a pull request, answering any default questions in the pull request interface.
- Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
Finding contributions to work on
Looking at the existing issues is a great way to find something to work on. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
Code of Conduct
This project has adopted the Amazon Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact firstname.lastname@example.org with any additional questions or comments.
Security issue notifications
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.
See the LICENSE file for our project's licensing. We will ask you to confirm the licensing of your contribution.
We may ask you to sign a Contributor License Agreement (CLA) for larger changes.