Repository Summary

Checkout URI: https://github.com/aws-robotics/kinesisvideo-ros1.git
VCS Type: git
VCS Version: master
Last Updated: 2022-02-08
Dev Status: MAINTAINED
CI Status: No Continuous Integration
Released: RELEASED
Tags: No category tags.

Packages

  • kinesis_video_msgs: 2.0.2
  • kinesis_video_streamer: 2.0.2

README

kinesis_video_streamer

Overview

The Kinesis Video Streams ROS package enables robots to stream video to the cloud for analytics, playback, and archival use. Out of the box, the provided nodes make it possible to encode and stream image data (e.g. video feeds and LIDAR scans) from a ROS “Image” topic to the cloud, enabling you to view the live video feed through the Kinesis Video Console, consume the stream from other applications, or perform intelligent analysis such as face detection and face recognition using Amazon Rekognition.

The node transmits standard sensor_msgs::Image data from ROS topics to Kinesis Video Streams, optionally encoding the images as H.264 video frames along the way (using the included h264_video_encoder), and optionally fetching Amazon Rekognition results from the corresponding Kinesis Data Streams and publishing them to local ROS topics.

Note: H.264 hardware encoding is supported out of the box for OMX encoders and has been tested on the Raspberry Pi 3. In all other cases software encoding is used, which is significantly more compute-intensive and may affect overall system performance. If you wish to use a custom ffmpeg/libav encoder, you may pass a codec ROS parameter to the encoder node (the name provided must be discoverable by avcodec_find_encoder_by_name). Certain scenarios may require offline caching of video streams, which this node does not yet perform.
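
For example, a minimal launch snippet selecting a custom software encoder might look like the following. This is a sketch only: it assumes the encoder node reads the codec parameter from its private namespace, and libx264 is just one encoder name discoverable by avcodec_find_encoder_by_name.

      <launch>
        <node pkg="h264_video_encoder" type="h264_video_encoder" name="h264_video_encoder" output="screen">
          <!-- sketch: selects the libx264 software encoder by name -->
          <param name="codec" value="libx264"/>
        </node>
      </launch>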

Amazon Kinesis Video Streams: Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and to quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.

Amazon Rekognition: The easy-to-use Rekognition API allows you to automatically identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Developers can quickly build a searchable content library to optimize media workflows, enrich recommendation engines by extracting text in images, or integrate secondary authentication into existing applications to enhance end-user security. With a wide variety of use cases, Amazon Rekognition enables you to easily add the benefits of computer vision to your business.

Keywords: ROS, AWS, Kinesis Video Streams

License

The source code is released under Apache 2.0.

Author: AWS RoboMaker
Affiliation: Amazon Web Services (AWS)

RoboMaker cloud extensions rely on third-party software licensed under open-source licenses and are provided for demonstration purposes only. Incorporation or use of RoboMaker cloud extensions in connection with your production workloads or commercial product(s) or devices may affect your legal rights or obligations under the applicable open-source licenses. License information for this repository can be found in the LICENSE file. AWS does not provide support for this cloud extension. You are solely responsible for how you configure, deploy, and maintain this cloud extension in your workloads or commercial product(s) or devices.

Supported ROS Distributions

  • Kinetic
  • Melodic

Installation

AWS Credentials

You will need to create an AWS Account and configure the credentials to be able to communicate with AWS services. You may find AWS Configuration and Credential Files helpful.

The IAM user will need permissions for the following actions:

  • kinesisvideo:CreateStream
  • kinesisvideo:TagStream
  • kinesisvideo:DescribeStream
  • kinesisvideo:GetDataEndpoint
  • kinesisvideo:PutMedia

For Amazon Rekognition integration, the user will also need permissions for these actions:

  • kinesis:ListShards
  • kinesis:GetShardIterator
  • kinesis:GetRecords
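
For reference, the actions above can be combined into a single IAM policy along these lines (illustrative only; in production, scope the Resource down to your specific stream ARNs):

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "kinesisvideo:CreateStream",
              "kinesisvideo:TagStream",
              "kinesisvideo:DescribeStream",
              "kinesisvideo:GetDataEndpoint",
              "kinesisvideo:PutMedia",
              "kinesis:ListShards",
              "kinesis:GetShardIterator",
              "kinesis:GetRecords"
            ],
            "Resource": "*"
          }
        ]
      }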

Building from Source

To build from source, you’ll need to create a new workspace, clone and check out the latest release branch of this repository, install all the dependencies, and compile. If you need the latest development features, you can clone from the master branch instead of the latest release branch. While the release branches are kept stable, master should be considered unstable due to ongoing development.

  • Install build tool: please refer to colcon installation guide

  • Create a ROS workspace and a source directory

      mkdir -p ~/ros-workspace/src
    
  • Clone the package into the source directory.

      cd ~/ros-workspace/src
      git clone https://github.com/aws-robotics/kinesisvideo-ros1.git -b release-latest
    
  • Install dependencies

      cd ~/ros-workspace 
      sudo apt-get update && rosdep update
      rosdep install --from-paths src --ignore-src -r -y
    

Note: If building the master branch instead of a release branch, you may also need to check out and build the master branches of the packages this package depends on.

  • Build the packages

      cd ~/ros-workspace && colcon build
    
  • Configure the ROS library path

      source ~/ros-workspace/install/setup.bash
    
  • Build and run the unit tests

      colcon build --packages-select kinesis_video_streamer --cmake-target tests
      colcon test --packages-select kinesis_video_streamer kinesis_manager && colcon test-result --all
    

Launch Files

This package includes a launch file, kinesis_video_streamer.launch, which shows how to load a stream configuration file onto the parameter server for this node. The launch file accepts the following argument:

  • stream_config (string): Path to a rosparam config file for the (first) stream. If not provided, the launch file defaults to the sample_configuration.yaml provided with this package.
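
For example, to point the launch file at your own stream configuration (the path below is illustrative):

      roslaunch kinesis_video_streamer kinesis_video_streamer.launch stream_config:=/path/to/my_stream_config.yaml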

An example launch file called sample_application.launch is also included, showing how you can include this node in your own project and provide it with arguments.

Usage

Run the node

  1. Configure the nodes (for more details, see the extended configuration section below).
  2. To use Amazon Rekognition for face detection and face recognition, follow the steps in the Rekognition guide (skip steps 8 & 9, as they are already performed by this node): https://docs.aws.amazon.com/rekognition/latest/dg/recognize-faces-in-a-video-stream.html
  3. Example: running on a Raspberry Pi
    • roslaunch raspicam_node camerav2_410x308_30fps.launch
    • roslaunch h264_video_encoder sample_application.launch
    • roslaunch kinesis_video_streamer sample_application.launch
    • Log into your AWS Console to see the available Kinesis Video stream.
    • For other platforms, replace the first roslaunch command with an equivalent command to launch your camera node, and reconfigure the topic names accordingly.

Configuration File and Parameters

Applies to the kinesis_video_streamer node. For configuring the encoder node, please see the README for the H264 Video Encoder node. An example configuration file called stream_sample_configuration.yaml is provided. When parameters are absent from the ROS parameter server, default values are used. Since this node makes HTTP requests to AWS endpoints, valid AWS credentials must be provided (for example via the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; see https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html).
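
For example, credentials can be exported into the node’s environment before launching (placeholder values shown):

      export AWS_ACCESS_KEY_ID=<your-access-key-id>
      export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>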

Node-wide configuration parameters

The parameters below apply to the node as a whole and are not specific to any one stream.

  • aws_client_configuration/region (string): The AWS region to which the video should be streamed.
  • kinesis_video/stream_count (int): The number of streams to load and transmit. Each stream should have its corresponding parameters set as described below.
  • kinesis_video/log4cplus_config (string, optional): Config file path for the log4cplus logger used by the Kinesis Video Producer SDK.
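
A minimal rosparam YAML sketch of these node-wide parameters might look like the following (values are illustrative; the slashes in the parameter names map to YAML nesting):

      # illustrative values only
      aws_client_configuration:
        region: "us-west-2"
      kinesis_video:
        stream_count: 1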

Stream-specific configuration parameters

The parameters below should be provided per stream, with the prefix being kinesis_video/stream<id>/<parameter name>.

  • subscription_queue_size (int, optional): The maximum number of messages to queue on the subscribed and published topics.
  • subscription_topic (string): Topic name to subscribe to for the stream’s input.
  • topic_type (int): Specifier for the transport protocol (message type) used: 1 for KinesisVideoFrame (supports H.264 streaming), 2 for sensor_msgs::Image transport, 3 for KinesisVideoFrame with AWS Rekognition support.
  • stream_name (string): The name of the stream resource in AWS Kinesis Video Streams.
  • rekognition_data_stream (string, required if topic_type == 3): The name of the Kinesis Data Stream from which AWS Rekognition analysis output should be read.
  • rekognition_topic_name (string, required if topic_type == 3): The ROS topic to which the analysis results should be published.

Additional stream-specific parameters such as frame_rate can be provided to further customize the stream definition structure. See Kinesis header stream definition for the remaining parameters and their default values.
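
Putting this together, a hypothetical single-stream configuration sketch (topic and stream names are illustrative) could look like:

      kinesis_video:
        stream_count: 1
        stream0:
          subscription_topic: "/h264/video"   # illustrative topic name
          topic_type: 1                       # KinesisVideoFrame input
          stream_name: "my-robot-stream"      # illustrative stream name
          frame_rate: 30                      # optional stream definition override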

Performance and Benchmark Results

We evaluated the performance of this node by running the following scenario on a Raspberry Pi 3 Model B Plus connected to a Raspberry Pi camera module. The camera output was set up at a rate of 30 fps and a resolution of 410x308 pixels, and encoded at a bitrate of 2 Mbps.

  • Launch a baseline graph containing the talker and listener nodes from the roscpp_tutorials package, plus two additional nodes that collect CPU and memory usage statistics. Allow the nodes to run for 60 seconds.
  • Following the instructions in the “Usage” section above, launch a raspicam_node node to get images from the camera module, then launch an h264_video_encoder node to encode the images, and finally launch a kinesis_video_streamer node to send the encoded frames to the Amazon Kinesis Video Streams service. Allow the nodes to run for 180 seconds.
  • Terminate the raspicam_node, h264_video_encoder and kinesis_video_streamer nodes, and allow the remaining nodes to run for 60 seconds.

The following graph shows the CPU usage during that scenario. After we start launching the Kinesis nodes at second 60, the 1-minute average CPU usage increases from an initial 5.5% for the baseline graph up to a peak of 20.25%, and stabilizes around 15% until we stop the nodes around second 260.

[Figure: CPU usage during the benchmark scenario]

The following graph shows the memory usage during that scenario. Free memory also accounts for additional memory available through a swap partition. After launching the Kinesis nodes around second 60, memory usage increases from 292 MB for the baseline graph up to a peak of 392 MB (+34.25%), and stabilizes around 374 MB (+28.08% relative to the baseline graph). Memory usage goes down to 318 MB after stopping the Kinesis nodes.

[Figure: memory usage during the benchmark scenario]

Node Details

Applies to the kinesis_video_streamer node. For encoder-specific configuration, please see the H264 Video Encoder README.

Subscribed Topics

The number of subscriptions is configurable and is determined by the kinesis_video/stream_count parameter. Each subscription is of the following form:

  • Topic Name: configurable
  • Message Type: configurable (kinesis_video_msgs/KinesisVideoFrame or sensor_msgs/Image)

The node will subscribe to a topic of the given name. The data is expected to be either images (such as from a camera node publishing Image messages) or video frames (such as from an encoder node publishing KinesisVideoFrame messages).
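
As a quick sanity check, you can confirm that data is arriving on a configured input topic (the topic name below is hypothetical; substitute your subscription_topic value):

      rostopic hz /h264/video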

CONTRIBUTING

Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it’s a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.

Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn’t already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

  • A reproducible test case or series of steps
  • The version of our code being used
  • Any modifications you’ve made relevant to the bug
  • Anything unusual about your environment or deployment

Contributing via Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

  1. You are working against the latest source on the master branch.
  2. You check existing open, and recently merged, pull requests to make sure someone else hasn’t addressed the problem already.
  3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

  1. Fork the repository.
  2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
  3. Ensure local tests pass.
  4. Commit to your fork using clear commit messages.
  5. Send us a pull request, answering any default questions in the pull request interface.
  6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on forking a repository and creating a pull request.

Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ‘help wanted’ issues is a great place to start.

Code of Conduct

This project has adopted the Amazon Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opensource-codeofconduct@amazon.com with any additional questions or comments.

Security issue notifications

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.

Licensing

See the LICENSE file for our project’s licensing. We will ask you to confirm the licensing of your contribution.

We may ask you to sign a Contributor License Agreement (CLA) for larger changes.

