
replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS 2-based framework for configuring, authoring, and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in unittest support for assertions against MCAP output
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Testing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop on your local machine than on the robot. If you can record that data on-robot first and then replay it locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

# ...

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file produced for the REPLAY_TEST_PATH you provided.
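The produced *.xunit.xml file follows the common JUnit XML layout, so a CI step can inspect it with nothing but the Python standard library. A minimal sketch (the report content below is a made-up stand-in, not output from this package):

```python
import xml.etree.ElementTree as ET

# A minimal JUnit/xunit-style report, standing in for the file that
# a replay test run would write under test_results/ (contents hypothetical).
sample = """<testsuite name="my_replay_test" tests="2" failures="1">
  <testcase classname="my_replay_test" name="test_cmd_vel_bounds"/>
  <testcase classname="my_replay_test" name="test_latency">
    <failure message="latency exceeded 100ms"/>
  </testcase>
</testsuite>"""

suite = ET.fromstring(sample)
# Collect the names of test cases that contain a <failure> child element.
failures = [
    tc.get("name")
    for tc in suite.iter("testcase")
    if tc.find("failure") is not None
]
print(failures)  # ['test_latency']
```

This is handy for gating a pipeline on replay-test failures without pulling in an extra report-parsing dependency.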

Authoring Replay Tests

Each replay test is authored in its own file, like my_replay_test.py. We expose a set of Python decorators that wrap each class in your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filters out any expected output topics that will be produced by the run step
  • Produces a filtered_fixture.mcap asset that is used in the run step
  • Asserts that the specified input topics are present
  • (Eventually) Provides ways to make your old data forwards-compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]
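Conceptually, the filter step checks that the required input topics exist in the recording, then strips out any messages on the expected output topics so the nodes under test regenerate them fresh. A rough sketch of that idea in plain Python (illustrative only, not the library's actual implementation; the message tuples stand in for MCAP records):

```python
# Messages as (topic, payload) pairs; a real fixture would hold MCAP records.
recorded = [
    ("/vehicle/cmd_vel", "msg-1"),
    ("/user/cmd_vel", "stale-output"),  # produced by a previous on-robot run
    ("/vehicle/cmd_vel", "msg-2"),
]

required_input_topics = ["/vehicle/cmd_vel"]
expected_output_topics = ["/user/cmd_vel"]

# Assert the required inputs are present in the recording.
present = {topic for topic, _ in recorded}
missing = set(required_input_topics) - present
assert not missing, f"missing input topics: {missing}"

# Drop expected outputs so the run step produces them anew.
filtered = [(t, m) for t, m in recorded if t not in expected_output_topics]
print(filtered)  # only the /vehicle/cmd_vel messages remain
```

Dropping stale outputs matters: if the old /user/cmd_vel messages were replayed alongside the inputs, assertions against the run's output would see a mix of old and new data.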

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription([
            # your launch actions here
        ])

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={...}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams  # Keyed by `name`
    ) -> LaunchDescription:
        return LaunchDescription([
            # your launch actions here
        ])
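The sweep runs your launch description once per entry, with each run keyed by name. A standalone sketch of that dispatch, using a stand-in dataclass shaped like ReplayRunParams (the field names match the usage above; the sweep function itself is hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ReplayRunParams:
    """Stand-in for the library's ReplayRunParams: a named bundle of params."""
    name: str
    params: dict = field(default_factory=dict)

def sweep(run_params: list[ReplayRunParams]) -> dict[str, str]:
    """Run a (mock) replay once per entry, keyed by `name`."""
    results = {}
    for rp in run_params:
        # In the real framework this would build the launch description with
        # rp.params, replay the filtered fixture, and record the output MCAP.
        gain = rp.params.get("gain", 1.0)
        results[rp.name] = f"ran with gain={gain}"
    return results

out = sweep([
    ReplayRunParams(name="low_gain", params={"gain": 0.5}),
    ReplayRunParams(name="high_gain", params={"gain": 2.0}),
])
print(out)
```

Each entry produces an independently named run, so a failing parameterization is identifiable in the test report by its `name`.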

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
jazzy

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro kilted showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
rolling

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro ardent showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro bouncy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro crystal showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro eloquent showing humble. Known supported distros are highlighted in the buttons above.
Authoring Replay Tests

Each replay test is authored in its own file, e.g. my_replay_test.py. The package exposes a set of Python decorators that wrap each class in your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

Collects and prepares your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filters out any expected output topics that will be produced by the run step
  • Produces a filtered_fixture.mcap asset that is used by the run step
  • Asserts that the specified input topics are present
  • (Eventually) Provides ways to make your old data forward compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]
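To make the filter phase concrete, here is a minimal pure-Python sketch of what it does conceptually. The `filter_fixture` function and the `(topic, message)` modeling are illustrative stand-ins, not the package's actual API:

```python
# Conceptual sketch of the filter phase (not replay_testing's implementation):
# assert that required input topics are present, then drop topics the run step
# is expected to re-produce so the replayed bag cannot mask the node under test.
def filter_fixture(messages, required_input_topics, expected_output_topics):
    topics_present = {topic for topic, _ in messages}
    missing = set(required_input_topics) - topics_present
    if missing:
        raise AssertionError(f"fixture missing required topics: {missing}")
    return [(topic, msg) for topic, msg in messages
            if topic not in expected_output_topics]

# A recorded fixture, modeled here as simple (topic, message) pairs.
recorded = [("/vehicle/cmd_vel", "m1"), ("/user/cmd_vel", "m2"), ("/tf", "m3")]
filtered = filter_fixture(recorded, ["/vehicle/cmd_vel"], ["/user/cmd_vel"])
print(filtered)  # the previously recorded /user/cmd_vel output is dropped
```

In the real framework this filtering happens on the MCAP file itself, producing the filtered_fixture.mcap asset described above.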

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams  # Keyed by `name`
    ) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
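The decorator mechanics can be illustrated with a small pure-Python stand-in. The `parameterize` function and `ReplayRunParams` dataclass below are simplified assumptions for illustration, not the package's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ReplayRunParams:
    """Simplified stand-in for replay_testing's ReplayRunParams."""
    name: str
    params: dict = field(default_factory=dict)

def parameterize(runs):
    """Attach the sweep to the decorated class, keyed by each run's name."""
    def wrap(cls):
        cls.replay_runs = {run.name: run for run in runs}
        return cls
    return wrap

@parameterize([
    ReplayRunParams(name="low_speed", params={"max_vel": 0.5}),
    ReplayRunParams(name="high_speed", params={"max_vel": 2.0}),
])
class Run:
    def generate_launch_description(self, replay_run_params):
        # A real test would build a LaunchDescription from these params;
        # returning them here shows one invocation per named run.
        return replay_run_params.params

# The framework would call generate_launch_description once per named run.
sweep = {name: Run().generate_launch_description(run)
         for name, run in Run.replay_runs.items()}
print(sweep)
```

Each named entry in the sweep yields one replay run, so assertions and recorded assets can be reported per `name`.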

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro dashing showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro galactic showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro foxy showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro iron showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro lunar showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro jade showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

For collecting and preparing your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filtering out any expected output topics that will be produced from the run step.
  • Produces a filtered_fixture.mcap asset that is used against the run step
  • Asserts that specified input topics are present
  • (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams # Keyed by `name`
    ) -> LaunchDescription:
      return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

File truncated at 100 lines see the full file

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange

No version for distro indigo showing humble. Known supported distros are highlighted in the buttons above.
Package symbol

replay_testing package from replay_testing repo

replay_testing

ROS Distro
humble

Package Summary

Tags No category tags.
Version 0.0.2
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/polymathrobotics/replay_testing.git
VCS Type git
VCS Version main
Last Updated 2025-05-15
Dev Status DEVELOPED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A testing library and CLI for replaying ROS nodes.

Additional Links

No additional links.

Maintainers

  • Troy Gibb
  • Emerson Knapp

Authors

  • Troy Gibb

Replay Testing

A ROS2-based framework for configuring, authoring and running replay tests.

Features include:

  • MCAP replay and automatic recording of assets for offline review
  • Baked-in Unittest support for MCAP asserts
  • Parametric sweeps
  • Easy-to-use CMake for running in CI
  • Lightweight CLI for running quickly

What is Replay Tesing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine rather than on robot. Therefore, if you are able to record that data on-robot first, and then replay locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of the conventions into an easy executable.

Usage

CLI

ros2 run replay_testing replay_test [REPLAY_TEST_PATH]

For other args:

ros2 run replay_testing replay_test --help

colcon test and CMake

This package exposes CMake you can use for running replay tests as part of your own package’s testing pipeline.

To use:

find_package(replay_testing REQUIRED)

..

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()


If you’ve set up your CI to persist artifact paths under test_results, you should see a *.xunit.xml file be produced based on the REPLAY_TEST_PATH you provided.

Authoring Replay Tests

Each replay test can be authored into its own file, like my_replay_test.py. We expose a set of Python decorators that you wrap each class for your test.

Replay testing has three distinct phases, all of which are required to run a replay test:

Filter Fixtures @fixtures

Collects and prepares your fixtures to be run against your launch specification. Duties include:

  • Provides a mechanism for specifying your input fixtures (e.g. lidar_data.mcap)
  • Filters out any expected output topics that will be produced by the run step
  • Produces a filtered_fixture.mcap asset that is used by the run step
  • Asserts that the specified input topics are present
  • (Eventually) will provide ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class FilterFixtures:
    required_input_topics = ["/vehicle/cmd_vel"]
    expected_output_topics = ["/user/cmd_vel"]
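Conceptually, the filter step checks that the required input topics exist in the recording and drops the expected output topics, so the run step regenerates them fresh instead of replaying stale copies. Here is a plain-Python sketch of that idea (illustrative only, not the replay_testing API; the message tuples are hypothetical):

```python
# Illustrative only: a sketch of the filtering idea, not the replay_testing
# implementation. Each recorded message is modeled as a (topic, payload) tuple.
def filter_fixture(messages, expected_output_topics, required_input_topics):
    topics_present = {topic for topic, _ in messages}
    # Fail fast if a required input topic never appears in the recording.
    missing = set(required_input_topics) - topics_present
    if missing:
        raise ValueError(f"missing required input topics: {sorted(missing)}")
    # Drop topics the system under test is expected to produce itself, so the
    # replay does not echo stale outputs alongside the freshly computed ones.
    drop = set(expected_output_topics)
    return [m for m in messages if m[0] not in drop]

recording = [
    ("/vehicle/cmd_vel", b"input-1"),
    ("/user/cmd_vel", b"stale-output"),
    ("/vehicle/cmd_vel", b"input-2"),
]
filtered = filter_fixture(recording, ["/user/cmd_vel"], ["/vehicle/cmd_vel"])
# `filtered` keeps only the /vehicle/cmd_vel input messages.
```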

Run @run

Specify a launch description that will run against the replayed fixture. Usage:

@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")

If you’d like to specify a parameter sweep, you can use the variant:

@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams  # Keyed by `name`
    ) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
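To see how a parameterize-style class decorator can fan a single Run class out into several named runs, here is a plain-Python sketch. The names ReplayRunParams and parameterize mirror the usage above, but this is an illustrative reimplementation under assumed semantics, not the replay_testing source:

```python
# Illustrative only: a sketch of a parameterize-style class decorator.
from dataclasses import dataclass, field


@dataclass
class ReplayRunParams:
    name: str
    params: dict = field(default_factory=dict)


def parameterize(runs):
    def decorator(cls):
        # Attach the sweep to the class, keyed by `name`; a test runner would
        # then call generate_launch_description once per entry.
        cls.replay_runs = {run.name: run for run in runs}
        return cls
    return decorator


@parameterize([
    ReplayRunParams(name="slow", params={"speed": 0.5}),
    ReplayRunParams(name="fast", params={"speed": 2.0}),
])
class Run:
    def generate_launch_description(self, replay_run_params):
        # Stand-in for building a real LaunchDescription.
        return f"launch with {replay_run_params.params}"


descriptions = {name: Run().generate_launch_description(p)
                for name, p in Run.replay_runs.items()}
```

Each entry in the sweep produces one independently named run, which is what lets the framework emit separate results per parameter set.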

File truncated at 100 lines; see the full file.

CHANGELOG

Changelog for package replay_testing

0.0.2 (2025-05-12)

  • Initial release of replay_testing package
  • Contributors: Troy Gibb, Eddy Zhou, Emerson Knapp

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged replay_testing at Robotics Stack Exchange
