Package Summary

Tags No category tags.
Version 1.0.5
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version humble
Last Updated 2024-02-17
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Aditya Pande
  • Michel Hidalgo

Authors

  • Dirk Thomas
  • Esteve Fernandez
  • Pete Baughman
  • William Woodall

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command line used to launch each process is available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc_launch_test.py.

Run the example by doing:

launch_test test/launch_testing/examples/good_proc_launch_test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestProcessOutput class.

The Launch Description

import launch
import launch.actions
import launch_testing.actions


def generate_test_description():
    # path_to_process is assumed to be defined elsewhere in the test file.
    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example.
        # In a more complicated launch description, we might want this action to
        # happen once some process starts or some other event happens.
        launch_testing.actions.ReadyToTest()
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested.

The launch description needs to include a ReadyToTest action to signal to the test framework that it's safe to start the active tests.

In the above example, there is no need to delay the start of the tests so the ReadyToTest action is a peer to the process under test and will signal to the framework that it's safe to start around the same time the ExecuteProcess action is run.

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the processes:

launch_testing.asserts.assertInStdout(proc_output, msg, process, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • process:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.

  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], process=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The process, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, assert on the exit codes of all processes.

launch_testing.asserts.assertSequentialStdout(proc_output, process, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • process and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

Active tests can also call methods that wait for particular output, or for a particular process to exit or time out. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, process=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, process, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.
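For example, to wait on one specific process rather than any process (a sketch; the process name listener and the output text are hypothetical):

```python
def wait_for_listener(proc_output):
    # Block until the listener process prints 'Talking', or raise an
    # AssertionError after 15 seconds.
    proc_output.assertWaitFor('Talking', process='listener', timeout=15)
```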

proc_info.assertWaitForShutdown(process, cmd_args=None, *, timeout=10)

  • process and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args test/launch_testing/examples/args_launch_test.py
launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=value

See the launch_testing example with arguments for further reference.

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.
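The package.xml entry would look something like this (declared as a test-time dependency):

```xml
<!-- package.xml -->
<test_depend>launch_testing_ament_cmake</test_depend>
```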

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

Examples

hello_world_launch_test.py

Usage:

launch_test test/launch_testing/examples/hello_world_launch_test.py

This test is a simple example of how to use launch_testing.

It launches a process and asserts that it prints "hello_world" to stdout using proc_output.assertWaitFor(). Finally, it checks if the process exits normally (zero exit code).

The @launch_testing.markers.keep_alive decorator ensures that the launch process stays alive long enough for the tests to run.

good_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/good_proc_launch_test.py

This test checks a process called good_proc.py (source found in the example_processes folder). good_proc.py is a simple Python process that prints "Loop 1", "Loop 2", etc. every second until it's terminated with ctrl+c. The test will launch the process, wait for a few loops to complete by monitoring stdout, then terminate the process and run some post-shutdown checks.

The pre-shutdown tests check that "Loop 1", "Loop 2", "Loop 3", and "Loop 4" are all printed to stdout. Once this test finishes, the process under test is shut down.

After shutdown, we run a similar test that checks more output, and also checks the order of the output. test_out_of_order demonstrates that the assertSequentialStdout context manager is able to detect out of order stdout.

terminating_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/terminating_proc_launch_test.py

This test checks proper functionality of the terminating_proc example (source found in the example_processes folder).

args_launch_test.py

Usage to view the arguments:

launch_test test/launch_testing/examples/args_launch_test.py --show-args

Usage to run the test:

launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=hey

This example shows how to pass arguments into a launch test. The arguments are made available in the launch description via a launch.substitutions.LaunchConfiguration. The arguments are made available to the test cases via a self.test_args dictionary.

This example will fail if no arguments are passed.

context_launch_test.py

Usage:

launch_test test/launch_testing/examples/context_launch_test.py

This example shows how the generate_test_description function can return a tuple where the second item is a dictionary of objects that will be injected into the individual test cases. Tests that wish to use elements of the test context can add arguments with names matching the keys of the dictionary.

CHANGELOG

Changelog for package launch_testing

1.0.5 (2024-02-16)

1.0.4 (2023-01-10)

  • Allow ReadyToTest() usage in event handler (#681)
  • Inherit markers from generate_test_description (#670) (#674)
  • Contributors: Nikolai Morin, mergify[bot]

1.0.3 (2022-10-18)

  • Fix Typo (#641) (#644)
  • Switch to using a comprehension for process_names. (#614) (#615)
  • Contributors: mergify[bot]

1.0.2 (2022-05-10)

1.0.1 (2022-04-13)

1.0.0 (2022-04-12)

  • Removed the deprecated ready_fn feature (#589)
  • Contributors: William Woodall

0.23.1 (2022-04-08)

0.23.0 (2022-03-30)

0.22.0 (2022-03-28)

0.21.1 (2022-03-01)

  • Added case for instances of ExecuteLocal in resolveProcess function (#587)
  • Add compatibility with pytest 7 (#592)
  • Contributors: Matt Lanting, Shane Loretz

0.21.0 (2022-01-14)

  • Renamed three files from example_processes (#573)
  • Fix launch_testing README.md proc keyword to process. (#554) (#560)
  • Contributors: Jacob Perron, Khush Jain

0.20.0 (2021-11-29)

  • Declare frontend group dependency & use explicit dependencies in launch_testing (#520)
  • Update maintainers to Aditya Pande and Michel Hidalgo (#559)
  • Updated maintainers (#555)
  • First prototype of native pytest plugin for launch based tests (#528)
  • Adding Executable description class (#454)
  • Add a "hello world" style example (#532)
  • Contributors: Aditya Pande, Audrow Nash, Christophe Bedard, Ivan Santiago Paunovic, roger-strain

0.19.0 (2021-07-15)

0.18.0 (2021-06-18)

0.17.0 (2021-04-06)

0.16.0 (2021-03-19)

  • Use unittest.mock instead of mock (#487)
  • Contributors: Michel Hidalgo

0.15.0 (2021-01-25)

0.14.0 (2020-12-08)

0.13.0 (2020-11-04)

  • Update package maintainers (#465)
  • Contributors: Michel Hidalgo

0.12.0 (2020-08-18)

0.11.1 (2020-08-14)

0.11.0 (2020-08-04)

  • Disable cleanup of test cases once they have been run (#406)
  • Fix max() with empty sequence (#440)
  • Use unittest.TestCase.id() for pytest failure reprs. (#436)
  • Use unittest.TestCase.id() to put together jUnit XML output. (#435)
  • Claim ownership (#433)
  • Contributors: Dirk Thomas, Michel Hidalgo, Scott K Logan, William Woodall

0.10.2 (2020-05-26)

  • Set junit_family to xunit2 in pytest.ini
  • Stop using implicit variables in example testing.
  • Switch to from_parent to remove deprecation warning.
  • Fix new flake8 errors. (#420)
  • Remove uses of deprecated ready_fn. (#419)
  • Contributors: Chris Lalancette, Michel Hidalgo

0.10.1 (2020-05-08)

  • fixed deprecation warning of imp to importlib (issue #387) (#407)
  • wait_for_ouput() repr includes actual text (#408)
  • Contributors: Shane Loretz, Zahi Kakish

0.10.0 (2020-04-24)

  • Improve jUnit output for launch tests when run with py.test (#404)
  • avoid deprecation warning, use from_parent (#402)
  • Warn that old-style ready_fn and test attributes will be deprecated (#346)
  • more verbose test_flake8 error messages (same as ros2/launch_ros#135)
  • stop using constructors deprecated in pytest 5.4 (#391)
  • Add the ability to assert in stdout or stderr. (#378)
  • Add delay parameter to retry_on_failure decorator (#390)
  • Make RegisterEventHandler describe its sub-entities (#386)
  • Import test file without contaminating sys.modules (#360)
  • Update reference to example launch test file (#363)
  • Use imperative mood in docstrings. (#362)
  • Fix a documentation typo. (#361)
  • Fix junit XML when launch dies early (#358)
  • Contributors: Chris Lalancette, Dan Rose, Dirk Thomas, Jacob Perron, Michel Hidalgo, Peter Baughman, Steven! Ragnarök

Wiki Tutorials

See ROS Wiki Tutorials for more details.

Source Tutorials

Not currently indexed.

Dependent Packages

Name Repo Deps
async_web_server_cpp github-fkie-async_web_server_cpp
diagnostic_updater github-ros-diagnostics
flexbe_core github-flexbe-flexbe_behavior_engine
flexbe_onboard github-flexbe-flexbe_behavior_engine
flexbe_testing github-flexbe-flexbe_behavior_engine
fmi_adapter github-boschresearch-fmi_adapter
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
swri_transform_util github-swri-robotics-marti_common
novatel_oem7_driver github-novatel-novatel_oem7_driver
pcl_ros github-ros-perception-perception_pcl
realsense2_camera github-IntelRealSense-realsense-ros
robot_calibration github-mikeferguson-robot_calibration
rosbridge_server github-RobotWebTools-rosbridge_suite
slam_toolbox github-SteveMacenski-slam_toolbox
joy_teleop github-ros-teleop-teleop_tools
twist_mux github-ros-teleop-twist_mux
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
dolly_tests github-chapulina-dolly
launch_testing_examples github-ros2-examples
kobuki_velocity_smoother github-kobuki-base-kobuki_velocity_smoother
launch_pytest github-ros2-launch
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
nav2_amcl github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_core github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
rcl github-ros2-rcl
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
joint_limits_interface github-ros-controls-ros2_control
plansys2_bringup github-IntelligentRoboticsLabs-ros2_planning_system
ros2action github-ros2-ros2cli
ros2doctor github-ros2-ros2cli
ros2interface github-ros2-ros2cli
ros2lifecycle github-ros2-ros2cli
ros2node github-ros2-ros2cli
ros2param github-ros2-ros2cli
ros2pkg github-ros2-ros2cli
ros2service github-ros2-ros2cli
ros2topic github-ros2-ros2cli
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing
ros2bag github-ros2-rosbag2
test_cli_remapping github-ros2-system_tests
test_communication github-ros2-system_tests
test_rclcpp github-ros2-system_tests
test_security github-ros2-system_tests
ros_core github-ros2-variants
webots_ros2_tests github-cyberbotics-webots_ros2
domain_bridge github-ros2-domain_bridge
ros2launch_security_examples github-osrf-ros2launch_security
ros_gz_bridge github-gazebosim-ros_gz
test_ros_gz_bridge github-gazebosim-ros_gz
stcamera_components github-ose-support-ros-stcamera_ros2
stcamera_grabber github-ose-support-ros-stcamera_ros2
stcamera_launch github-ose-support-ros-stcamera_ros2

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.


Package Summary

Tags No category tags.
Version 2.0.2
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version iron
Last Updated 2023-07-13
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Aditya Pande
  • Brandon Ong
  • William Woodall

Authors

  • Dirk Thomas
  • Esteve Fernandez
  • Michel Hidalgo
  • Pete Baughman
  • William Woodall

CHANGELOG

Changelog for package launch_testing

2.0.2 (2023-07-14)

2.0.1 (2023-04-12)

2.0.0 (2023-04-11)

  • Improve type checking (#679)
  • Contributors: Hervé

Wiki Tutorials

See ROS Wiki Tutorials for more details.

Source Tutorials

Not currently indexed.

Dependent Packages

Name Repo Deps
async_web_server_cpp github-fkie-async_web_server_cpp
diagnostic_updater github-ros-diagnostics
flexbe_core github-flexbe-flexbe_behavior_engine
flexbe_onboard github-flexbe-flexbe_behavior_engine
flexbe_testing github-flexbe-flexbe_behavior_engine
fmi_adapter github-boschresearch-fmi_adapter
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
swri_transform_util github-swri-robotics-marti_common
pcl_ros github-ros-perception-perception_pcl
realsense2_camera github-IntelRealSense-realsense-ros
robot_calibration github-mikeferguson-robot_calibration
rosbridge_server github-RobotWebTools-rosbridge_suite
slam_toolbox github-SteveMacenski-slam_toolbox
joy_teleop github-ros-teleop-teleop_tools
twist_mux github-ros-teleop-twist_mux
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
dolly_tests github-chapulina-dolly
launch_testing_examples github-ros2-examples
kobuki_velocity_smoother github-kobuki-base-kobuki_velocity_smoother
launch_pytest github-ros2-launch
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
nav2_amcl github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_core github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
rcl github-ros2-rcl
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
joint_limits_interface github-ros-controls-ros2_control
plansys2_bringup github-PlanSys2-ros2_planning_system
ros2action github-ros2-ros2cli
ros2doctor github-ros2-ros2cli
ros2interface github-ros2-ros2cli
ros2lifecycle github-ros2-ros2cli
ros2node github-ros2-ros2cli
ros2param github-ros2-ros2cli
ros2pkg github-ros2-ros2cli
ros2service github-ros2-ros2cli
ros2topic github-ros2-ros2cli
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing
ros2bag github-ros2-rosbag2
test_cli_remapping github-ros2-system_tests
test_communication github-ros2-system_tests
test_rclcpp github-ros2-system_tests
test_security github-ros2-system_tests
ros_core github-ros2-variants
webots_ros2_tests github-cyberbotics-webots_ros2
domain_bridge github-ros2-domain_bridge
ros2launch_security_examples github-osrf-ros2launch_security
rcl_logging_rcutils github-sloretz-rcl_logging_rcutils
ros_gz_bridge github-gazebosim-ros_gz
test_ros_gz_bridge github-gazebosim-ros_gz

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.


Package Summary

Tags No category tags.
Version 3.4.0
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version rolling
Last Updated 2024-03-13
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Aditya Pande
  • Brandon Ong
  • William Woodall

Authors

  • Dirk Thomas
  • Esteve Fernandez
  • Michel Hidalgo
  • Pete Baughman
  • William Woodall

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command lines used to launch the processes are available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc_launch_test.py.

Run the example by doing:

launch_test test/launch_testing/examples/good_proc_launch_test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestProcessOutput class.

The Launch Description

def generate_test_description():

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example.
        # In a more complicated launch description, we might want this action to happen
        # once some process starts or once some other event happens
        launch_testing.actions.ReadyToTest()
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested.

The launch description needs to include a ReadyToTest action to signal to the test framework that it's safe to start the active tests.

In the above example, there is no need to delay the start of the tests so the ReadyToTest action is a peer to the process under test and will signal to the framework that it's safe to start around the same time the ExecuteProcess action is run.

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.
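As an illustration, a minimal active test might look like the sketch below (the class name and expected output follow the good_proc example; `proc_output` injection is described in the sections that follow):

```python
import unittest


class TestGoodProcess(unittest.TestCase):
    # No post_shutdown_test decorator, so this class runs while the
    # launched processes are still alive.

    def test_counts_to_four(self, proc_output):
        # proc_output is injected by launch_testing; these calls block
        # until the expected text appears, or raise AssertionError
        # after the default 10 second timeout.
        proc_output.assertWaitFor('Loop 1', timeout=10)
        proc_output.assertWaitFor('Loop 4', timeout=10)
```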

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase and are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.
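For example, a post-shutdown test class can be declared with the decorator like this (a sketch; the assertion shown is documented below):

```python
import unittest

import launch_testing
import launch_testing.asserts


@launch_testing.post_shutdown_test()
class TestProcessOutput(unittest.TestCase):

    def test_exit_codes(self, proc_info):
        # By the time this runs, every launched process has exited,
        # so its exit code is available through proc_info.
        launch_testing.asserts.assertExitCodes(proc_info)
```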

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the process:

launch_testing.asserts.assertInStdout(proc_output, msg, process, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • process:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.

  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], process=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The process, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, assert on the exit codes of all processes.
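As a sketch of how assertInStdout is typically used from a post-shutdown test (the process name 'good_proc' and the expected text are hypothetical examples):

```python
import unittest

import launch_testing
import launch_testing.asserts


@launch_testing.post_shutdown_test()
class TestStdout(unittest.TestCase):

    def test_prints_loop(self, proc_output):
        # Only the output of the process named 'good_proc' is searched;
        # with strict_proc_matching, matching several processes is an error.
        launch_testing.asserts.assertInStdout(
            proc_output, 'Loop 1', 'good_proc', strict_proc_matching=True)
```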

launch_testing.asserts.assertSequentialStdout(proc_output, process, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • process and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

Active tests can also call methods that wait for particular output, or for a particular process to exit or time out. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, process=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, process, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.

proc_info.assertWaitForShutdown(process, cmd_args=None, *, timeout=10)

  • process and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.
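A sketch of an active test that combines both waits (the process name 'my_proc' and the expected output are hypothetical):

```python
import unittest


class TestRunsToCompletion(unittest.TestCase):

    def test_full_run(self, proc_output, proc_info):
        # Wait up to 10 seconds for the expected output...
        proc_output.assertWaitFor('Loop 4', timeout=10)
        # ...then wait for the named process to shut down.
        proc_info.assertWaitForShutdown('my_proc', timeout=10)
```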

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args test/launch_testing/examples/args_launch_test.py
launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=value

See the launch_testing example with arguments for further reference.
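The pattern can be sketched as follows (a minimal example, not the shipped args_launch_test.py; the argument name dut_arg mirrors the command lines above):

```python
import unittest

import launch
import launch.actions
import launch_testing.actions


def generate_test_description():
    return launch.LaunchDescription([
        # Without a default_value, launching fails unless dut_arg:=... is given.
        launch.actions.DeclareLaunchArgument('dut_arg'),
        launch_testing.actions.ReadyToTest(),
    ])


class TestArgs(unittest.TestCase):

    def test_arg_was_passed(self, test_args):
        # test_args maps argument names to the values given on the command line.
        self.assertIn('dut_arg', test_args)
```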

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

Examples

hello_world_launch_test.py

Usage:

launch_test test/launch_testing/examples/hello_world_launch_test.py

This test is a simple example of how to use launch_testing.

It launches a process and asserts that it prints "hello_world" to stdout using proc_output.assertWaitFor(). Finally, it checks if the process exits normally (zero exit code).

The @launch_testing.markers.keep_alive decorator ensures that the launch process stays alive long enough for the tests to run.
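The decorator goes on the launch description generator itself; a minimal sketch:

```python
import launch
import launch_testing.actions
import launch_testing.markers


@launch_testing.markers.keep_alive
def generate_test_description():
    # keep_alive keeps the launch service alive until the active tests
    # have finished, even if every launched process exits first.
    return launch.LaunchDescription([
        launch_testing.actions.ReadyToTest(),
    ])
```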

good_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/good_proc_launch_test.py

This test checks a process called good_proc.py (source found in the example_processes folder). good_proc.py is a simple Python process that prints "Loop 1", "Loop 2", etc. every second until it's terminated with ctrl+c. The test will launch the process, wait for a few loops to complete by monitoring stdout, then terminate the process and run some post-shutdown checks.

The pre-shutdown tests check that "Loop 1", "Loop 2", "Loop 3", and "Loop 4" are all printed to stdout. Once this test finishes, the process under test is shut down.

After shutdown, we run a similar test that checks more output, and also checks the order of the output. test_out_of_order demonstrates that the assertSequentialStdout context manager is able to detect out of order stdout.

terminating_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/terminating_proc_launch_test.py

This test checks proper functionality of the terminating_proc example (source found in the example_processes folder).

args_launch_test.py

Usage to view the arguments:

launch_test test/launch_testing/examples/args_launch_test.py --show-args

Usage to run the test:

launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=hey

This example shows how to pass arguments into a launch test. The arguments are made available in the launch description via a launch.substitutions.LaunchConfiguration. The arguments are made available to the test cases via a self.test_args dictionary.

This example will fail if no arguments are passed.

context_launch_test.py

Usage:

launch_test test/launch_testing/examples/context_launch_test.py

This example shows how the generate_test_description function can return a tuple where the second item is a dictionary of objects that will be injected into the individual test cases. Tests that wish to use elements of the test context can add arguments with names matching the keys of the dictionary.
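The tuple-returning pattern can be sketched as follows (a minimal example, not the shipped context_launch_test.py; the process command and context key are hypothetical):

```python
import unittest

import launch
import launch.actions
import launch_testing.actions


def generate_test_description():
    dut = launch.actions.ExecuteProcess(cmd=['my_process'])  # hypothetical command
    ld = launch.LaunchDescription([
        dut,
        launch_testing.actions.ReadyToTest(),
    ])
    # The second item of the tuple is the test context dictionary.
    return ld, {'dut': dut}


class TestWithContext(unittest.TestCase):

    def test_uses_context(self, dut):
        # The argument name 'dut' matches a key in the context dictionary,
        # so the ExecuteProcess action is injected here.
        self.assertIsNotNone(dut)
```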

CHANGELOG

Changelog for package launch_testing

3.4.0 (2024-02-07)

3.3.0 (2024-01-24)

3.2.1 (2023-12-26)

3.2.0 (2023-10-04)

  • to open expected output file with an encoding parameter (#717)
  • Contributors: Chen Lihui

3.1.0 (2023-09-08)

3.0.1 (2023-09-07)

3.0.0 (2023-08-21)

2.2.1 (2023-07-11)

2.2.0 (2023-06-07)

2.1.0 (2023-04-27)

2.0.1 (2023-04-12)

2.0.0 (2023-04-11)

  • Improve type checking (#679)
  • Contributors: Herv

Wiki Tutorials

See ROS Wiki Tutorials for more details.

Source Tutorials

Not currently indexed.

Dependent Packages

Name Repo Deps
async_web_server_cpp github-fkie-async_web_server_cpp
diagnostic_updater github-ros-diagnostics
flexbe_core github-flexbe-flexbe_behavior_engine
flexbe_onboard github-flexbe-flexbe_behavior_engine
flexbe_testing github-flexbe-flexbe_behavior_engine
fmi_adapter github-boschresearch-fmi_adapter
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
swri_transform_util github-swri-robotics-marti_common
pcl_ros github-ros-perception-perception_pcl
robot_calibration github-mikeferguson-robot_calibration
rosbridge_server github-RobotWebTools-rosbridge_suite
joy_teleop github-ros-teleop-teleop_tools
twist_mux github-ros-teleop-twist_mux
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
dolly_tests github-chapulina-dolly
launch_testing_examples github-ros2-examples
kobuki_velocity_smoother github-kobuki-base-kobuki_velocity_smoother
launch_pytest github-ros2-launch
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
rcl github-ros2-rcl
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
ros2action github-ros2-ros2cli
ros2doctor github-ros2-ros2cli
ros2interface github-ros2-ros2cli
ros2lifecycle github-ros2-ros2cli
ros2node github-ros2-ros2cli
ros2param github-ros2-ros2cli
ros2pkg github-ros2-ros2cli
ros2service github-ros2-ros2cli
ros2topic github-ros2-ros2cli
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing
ros2bag github-ros2-rosbag2
test_cli_remapping github-ros2-system_tests
test_communication github-ros2-system_tests
test_rclcpp github-ros2-system_tests
test_security github-ros2-system_tests
ros_core github-ros2-variants
webots_ros2_tests github-cyberbotics-webots_ros2
domain_bridge github-ros2-domain_bridge
ros2launch_security_examples github-osrf-ros2launch_security
rcl_logging_rcutils github-sloretz-rcl_logging_rcutils
ros_gz_bridge github-gazebosim-ros_gz
test_ros_gz_bridge github-gazebosim-ros_gz


Package Summary

Tags No category tags.
Version 0.4.0
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version ardent
Last Updated 2017-12-09
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Helper scripts for tests that use the ROS launch tool.

Additional Links

No additional links.

Maintainers

  • Dirk Thomas

Authors

  • Esteve Fernandez
README
No README found. No README in repository either.
CHANGELOG
No CHANGELOG found.


Package Summary

Tags No category tags.
Version 0.6.0
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version bouncy
Last Updated 2018-08-22
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Helper scripts for tests that use the ROS launch tool.

Additional Links

No additional links.

Maintainers

  • Dirk Thomas

Authors

  • Esteve Fernandez
README
No README found. No README in repository either.
CHANGELOG

Changelog for package launch_testing

0.6.0 (2018-08-20)

0.5.2 (2018-07-17)

0.5.1 (2018-06-27)

0.5.0 (2018-06-19)

  • Updated to use new launch.legacy namespace (#73)
  • Contributors: Dirk Thomas, Mikael Arguedas, William Woodall


Package Summary

Tags No category tags.
Version 0.7.4
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version crystal
Last Updated 2019-04-02
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Helper scripts for tests that use the ROS launch tool.

Additional Links

No additional links.

Maintainers

  • Dirk Thomas

Authors

  • Esteve Fernandez
README
No README found. No README in repository either.
CHANGELOG

Changelog for package launch_testing

0.7.4 (2019-04-02)

0.7.3 (2018-12-13)

0.7.2 (2018-12-06)

0.7.1 (2018-11-16)

  • Fixed setup.py versions (#155)
  • Contributors: Steven! Ragnar


Package Dependencies

System Dependencies

Dependent Packages

Name Repo Deps
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
nav2_amcl github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_bt_navigator github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
costmap_queue github-ros-planning-navigation2
dwb_controller github-ros-planning-navigation2
dwb_core github-ros-planning-navigation2
dwb_critics github-ros-planning-navigation2
dwb_msgs github-ros-planning-navigation2
dwb_plugins github-ros-planning-navigation2
nav2_dwb_controller github-ros-planning-navigation2
nav_2d_msgs github-ros-planning-navigation2
nav_2d_utils github-ros-planning-navigation2
nav2_dynamic_params github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_mission_executor github-ros-planning-navigation2
nav2_motion_primitives github-ros-planning-navigation2
nav2_msgs github-ros-planning-navigation2
nav2_navfn_planner github-ros-planning-navigation2
nav2_robot github-ros-planning-navigation2
nav2_simple_navigator github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
nav2_tasks github-ros-planning-navigation2
nav2_util github-ros-planning-navigation2
nav2_voxel_grid github-ros-planning-navigation2
nav2_world_model github-ros-planning-navigation2
navigation2 github-ros-planning-navigation2
rcl_yaml_param_parser github-ros2-rcl
rcl_logging_log4cxx github-ros2-rcl_logging
rcl_logging_noop github-ros2-rcl_logging
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge


Package Summary

Tags No category tags.
Version 0.9.7
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version eloquent
Last Updated 2020-12-04
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Pete Baughman

Authors

  • Pete Baughman
  • Dirk Thomas
  • Esteve Fernandez

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command lines used to launch the processes are available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc.test.py.

Run the example by doing:

launch_test launch_testing/examples/good_proc.test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestNodeOutput class.

The Launch Description

def generate_test_description():

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example.
        # In a more complicated launch description, we might want this action to happen
        # once some process starts or once some other event happens
        launch_testing.actions.ReadyToTest()
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested.

The launch description needs to include a ReadyToTest action to signal to the test framework that it's safe to start the active tests.

In the above example, there is no need to delay the start of the tests so the ReadyToTest action is a peer to the process under test and will signal to the framework that it's safe to start around the same time the ExecuteProcess action is run.

In older style tests, a function called ready_fn is declared as an argument to generate_test_description and must be plumbed into the launch description with an OpaqueFunction.

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase and are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the process:

launch_testing.asserts.assertInStdout(proc_output, msg, proc, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • proc:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.

  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], proc=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The proc, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, assert on the exit codes of all processes.

launch_testing.asserts.assertSequentialStdout(proc_output, proc, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • proc and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

Active tests can also call methods that wait for particular output, or for a particular process to exit or time out. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, proc=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, proc, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.

proc_info.assertWaitForShutdown(proc, cmd_args=None, *, timeout=10)

  • proc and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args examples/args.test.py
launch_test examples/args.test.py dut_arg:=value

See the launch_testing example with arguments for further reference.

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

Examples

good_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/good_proc_launch_test.py

This test checks a process called good_proc (source found in the example_processes folder). good_proc is a simple Python process that prints "Loop 1", "Loop 2", etc. every second until it's terminated with ctrl+c. The test will launch the process, wait for a few loops to complete by monitoring stdout, then terminate the process and run some post-shutdown checks.

The pre-shutdown tests check that "Loop 1", "Loop 2", "Loop 3", and "Loop 4" are all printed to stdout. Once this test finishes, the process under test is shut down.

After shutdown, we run a similar test that checks more output, and also checks the order of the output. test_out_of_order demonstrates that the assertSequentialStdout context manager is able to detect out of order stdout.

terminating_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/terminating_proc_launch_test.py

This test checks proper functionality of the terminating_proc example (source found in the example_processes folder).

args_launch_test.py

Usage to view the arguments:

launch_test test/launch_testing/examples/args_launch_test.py --show-args

Usage to run the test:

launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=hey

This example shows how to pass arguments into a launch test. The arguments are made available in the launch description via a launch.substitutions.LaunchConfiguration. The arguments are made available to the test cases via a self.test_args dictionary.

This example will fail if no arguments are passed.

context_launch_test.py

Usage:

launch_test test/launch_testing/examples/context_launch_test.py

This example shows how the generate_test_description function can return a tuple where the second item is a dictionary of objects that will be injected into the individual test cases. Tests that wish to use elements of the test context can add arguments with names matching the keys of the dictionary.

CHANGELOG

Changelog for package launch_testing

0.9.7 (2020-12-04)

  • [Eloquent backport] stop using constructors deprecated in pytest 5.4 (#391) Switch to from_parent (partial #421) avoid deprecation warning, use from_parent (#402) (#459)
  • Contributors: Shane Loretz

0.9.6 (2020-01-21)

  • Import test file without contaminating sys.modules (#360)
  • Contributors: Peter Baughman

0.9.5 (2019-11-13)

  • Make launch_testing.markers.retry_on_failure decorator more robust. (#352)
  • Contributors: Michel Hidalgo

0.9.4 (2019-11-08)

  • Fix a small typo in the launch_testing README. (#351)
  • Contributors: Chris Lalancette

0.9.3 (2019-10-23)

0.9.2 (2019-10-23)

  • Support launch test reruns when using pytest (#348)
  • Support CLI commands testing (#279)
  • Contributors: Michel Hidalgo

0.9.1 (2019-09-25)

  • Optionally remove ready fn arg from generate_test_description (#322)
  • Contributors: Michel Hidalgo, Peter Baughman

0.9.0 (2019-09-18)

  • install package manifest (#330)
  • Unindent setup.cfg options. (#326)
  • Use renamed remove_ansi_escape_sequences. (#302)
  • Enable launch test discovery in pytest (#312)
  • Support LaunchService injection into pre-shutdown tests. (#308)
  • Add assertWaitForStartup method to match assertWaitForShutdown (#278)
  • Fix a simple typo in an error message. (#301)
  • Fix launch_testing output filtering (#296)
  • Revert "Revert "[execute_process] emulate_tty configurable and defaults to true"" (#277)
  • Fix formatting (#262)
  • Fix proc lookup for processes with multiple command-line arguments (#229)
  • Remove ros domain ID dependency (#256)
  • Contributors: Chris Lalancette, Dirk Thomas, Esteve Fernandez, Michel Hidalgo, Peter Baughman, William Woodall, ivanpauno

0.8.3 (2019-05-29)

  • Changed behavior to use --isolated if no ROS_DOMAIN_ID is set to help parallel testing. (#251)
  • Contributors: Peter Baughman

0.8.2 (2019-05-20)

  • add non-asserting waitFor method (#243)
  • Enable reuse of launch testing functionality (#236)
  • Stop randomizing ROS_DOMAIN_ID by default in launch tests (#240)
  • Contributors: Dirk Thomas, Michel Hidalgo

0.8.1 (2019-05-08)

0.8.0 (2019-04-13)

  • Added test actions. (#178)
  • Fixed test_env_testing test (#200)
  • Dropped legacy launch package. (#191)
  • Migrated legacy launch API tests. (#167)
  • Contributors: Dirk Thomas, Michel Hidalgo, ivanpauno

0.7.3 (2018-12-13)

0.7.2 (2018-12-06)

0.7.1 (2018-11-16)

  • Fixed setup.py versions (#155)
  • Contributors: Steven! Ragnar


Package Summary

Tags No category tags.
Version 0.8.7
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version dashing
Last Updated 2020-03-25
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Pete Baughman

Authors

  • Pete Baughman
  • Dirk Thomas
  • Esteve Fernandez

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command lines used to launch the processes are available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc.test.py.

Run the example by doing:

launch_test launch_testing/examples/good_proc.test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestNodeOutput class.

The Launch Description

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested. It should also call the ready_fn that is passed in to signal when the tests should start. In the good_proc.test.py example, there is no need to delay the start of the tests, so the ready_fn is called concurrently with the launching of the process under test.

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the processes:

launch_testing.asserts.assertInStdout(proc_output, msg, proc, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • proc:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.
  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], proc=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The proc, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, the exit codes of all processes are checked.

launch_testing.asserts.assertSequentialStdout(proc_output, proc, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • proc and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes, because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

The active tests can also call methods that wait for particular output or for a particular process to exit, with a timeout. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, proc=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, proc, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.

proc_info.assertWaitForShutdown(proc, cmd_args=None, *, timeout=10)

  • proc and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args examples/args.test.py
launch_test examples/args.test.py dut_arg:=value

See the launch_testing example with arguments for further reference.

ROS_DOMAIN_ID Isolation

If the ROS_DOMAIN_ID environment variable isn't set, launch_test will automatically coordinate with other launch_test processes running on the same host to use a unique ROS_DOMAIN_ID for the launched processes. This allows multiple instances to run in parallel (the default with colcon test). Note that launch_test cannot coordinate unique domains across multiple hosts.

If the ROS_DOMAIN_ID environment variable is already set, launch_test respects the environment variable and won't attempt to select a different ID. In this case it's the responsibility of the user to design tests that can be safely run in parallel, or not use parallel test workers.

When working on a system without a ROS_DOMAIN_ID set, the automatic domain isolation behavior can be disabled with the --disable-isolation flag. This can be useful for debugging tests by running without isolation and running a command like ros2 topic echo in another terminal window to see what's happening in the test as it runs.

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

CHANGELOG

Changelog for package launch_testing

0.8.7 (2019-10-11)

0.8.6 (2019-09-06)

  • Address additional syntax issues with Python 3.5. (#328)
  • Restore support for Python 3.5 (#324)
  • Contributors: Steven! Ragnar


Dependent Packages

Name Repo Deps
fmi_adapter github-boschresearch-fmi_adapter
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
rosauth github-GT-RAIL-rosauth
rosbridge_server github-RobotWebTools-rosbridge_suite
slam_toolbox github-SteveMacenski-slam_toolbox
tts github-aws-robotics-tts-ros2
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
nav2_amcl github-ros-planning-navigation2
nav2_behavior_tree github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_bt_navigator github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
costmap_queue github-ros-planning-navigation2
dwb_controller github-ros-planning-navigation2
dwb_core github-ros-planning-navigation2
dwb_critics github-ros-planning-navigation2
dwb_msgs github-ros-planning-navigation2
dwb_plugins github-ros-planning-navigation2
nav2_dwb_controller github-ros-planning-navigation2
nav_2d_msgs github-ros-planning-navigation2
nav_2d_utils github-ros-planning-navigation2
nav2_dynamic_params github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_msgs github-ros-planning-navigation2
nav2_navfn_planner github-ros-planning-navigation2
nav2_recoveries github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
nav2_util github-ros-planning-navigation2
nav2_voxel_grid github-ros-planning-navigation2
nav2_world_model github-ros-planning-navigation2
navigation2 github-ros-planning-navigation2
rcl github-ros2-rcl
rcl_logging_log4cxx github-ros2-rcl_logging
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing


Package Summary

Tags No category tags.
Version 0.17.2
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version galactic
Last Updated 2022-12-02
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • William Woodall
  • Michel Hidalgo

Authors

  • Pete Baughman
  • Dirk Thomas
  • Esteve Fernandez

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command line used to launch each process is available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc_launch_test.py.

Run the example by doing:

launch_test test/launch_testing/examples/good_proc_launch_test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestProcessOutput class.

The Launch Description

def generate_test_description():

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example.
        # In a more complicated launch description, we might want this action to happen
        # once some process starts or once some other event happens.
        launch_testing.actions.ReadyToTest()
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested.

The launch description needs to include a ReadyToTest action to signal to the test framework that it's safe to start the active tests.

In the above example, there is no need to delay the start of the tests so the ReadyToTest action is a peer to the process under test and will signal to the framework that it's safe to start around the same time the ExecuteProcess action is run.

In older style tests, a function called ready_fn is declared as an argument to generate_test_description and must be plumbed into the launch description with an OpaqueFunction. This method has been fully replaced by the ReadyToTest action and is therefore deprecated.

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the processes:

launch_testing.asserts.assertInStdout(proc_output, msg, process, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • process:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.

  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], process=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The process, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, the exit codes of all processes are checked.

launch_testing.asserts.assertSequentialStdout(proc_output, process, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • process and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes, because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

The active tests can also call methods that wait for particular output or for a particular process to exit, with a timeout. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, process=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, process, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.

proc_info.assertWaitForShutdown(process, cmd_args=None, *, timeout=10)

  • process and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args test/launch_testing/examples/args_launch_test.py
launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=value

See the launch_testing example with arguments for further reference.

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

Examples

good_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/good_proc_launch_test.py

This test checks a process called good_proc (source found in the example_processes folder). good_proc is a simple Python process that prints "Loop 1", "Loop 2", etc. every second until it is terminated with ctrl+c. The test will launch the process, wait for a few loops to complete by monitoring stdout, then terminate the process and run some post-shutdown checks.

The pre-shutdown tests check that "Loop 1", "Loop 2", "Loop 3", and "Loop 4" are all printed to stdout. Once this test finishes, the process under test is shut down.

After shutdown, we run a similar test that checks more output, and also checks the order of the output. test_out_of_order demonstrates that the assertSequentialStdout context manager is able to detect out of order stdout.

terminating_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/terminating_proc_launch_test.py

This test checks proper functionality of the terminating_proc example (source found in the example_processes folder).

args_launch_test.py

Usage to view the arguments:

launch_test test/launch_testing/examples/args_launch_test.py --show-args

Usage to run the test:

launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=hey

This example shows how to pass arguments into a launch test. The arguments are made available in the launch description via a launch.substitutions.LaunchConfiguration. The arguments are made available to the test cases via a self.test_args dictionary.

This example will fail if no arguments are passed.

context_launch_test.py

Usage:

launch_test test/launch_testing/examples/context_launch_test.py

This example shows how the generate_test_description function can return a tuple where the second item is a dictionary of objects that will be injected into the individual test cases. Tests that wish to use elements of the test context can add arguments with names matching the keys of the dictionary.

CHANGELOG

Changelog for package launch_testing

0.17.2 (2022-12-02)

  • Inherit markers from generate_test_description (#675)
  • Fix Typo (#643)
  • Add compatibility with pytest 7 (#629)
  • Mention that ready_fn in generate_test_description is deprecated (#623)
  • Switch to using a comprehension for process_names. (#616)
  • Contributors: Bi0T1N, Chris Lalancette, Kenji Brameld, Scott K Logan, Shane Loretz

0.17.1 (2022-04-28)

0.17.0 (2021-04-06)

0.16.0 (2021-03-19)

  • Use unittest.mock instead of mock (#487)
  • Contributors: Michel Hidalgo

0.15.0 (2021-01-25)

0.14.0 (2020-12-08)

0.13.0 (2020-11-04)

  • Update package maintainers (#465)
  • Contributors: Michel Hidalgo

0.12.0 (2020-08-18)

0.11.1 (2020-08-14)

0.11.0 (2020-08-04)

  • Disable cleanup of test cases once they have been run (#406)
  • Fix max() with empty sequence (#440)
  • Use unittest.TestCase.id() for pytest failure reprs. (#436)
  • Use unittest.TestCase.id() to put together jUnit XML output. (#435)
  • Claim ownership (#433)
  • Contributors: Dirk Thomas, Michel Hidalgo, Scott K Logan, William Woodall

0.10.2 (2020-05-26)

  • Set junit_family to xunit2 in pytest.ini
  • Stop using implicit variables in example testing.
  • Switch to from_parent to remove deprecation warning.
  • Fix new flake8 errors. (#420)
  • Remove uses of deprecated ready_fn. (#419)
  • Contributors: Chris Lalancette, Michel Hidalgo

0.10.1 (2020-05-08)

  • fixed depcrecation warning of imp to importlib (issue #387) (#407)
  • wait_for_ouput() repr includes actual text (#408)
  • Contributors: Shane Loretz, Zahi Kakish

0.10.0 (2020-04-24)

  • Improve jUnit output for launch tests when run with py.test (#404)
  • avoid deprecation warning, use from_parent (#402)
  • Warn that old-style ready_fn and test attributes will be deprecated (#346)
  • more verbose test_flake8 error messages (same as ros2/launch_ros#135)
  • stop using constructors deprecated in pytest 5.4 (#391)
  • Add the ability to assert in stdout or stderr. (#378)
  • Add delay parameter to retry_on_failure decorator (#390)
  • Make RegisterEventHandler describe its sub-entities (#386)
  • Import test file without contaminating sys.modules (#360)
  • Update reference to example launch test file (#363)
  • Use imperative mood in docstrings. (#362)
  • Fix a documentation typo. (#361)
  • Fix junit XML when launch dies early (#358)
  • Contributors: Chris Lalancette, Dan Rose, Dirk Thomas, Jacob Perron, Michel Hidalgo, Peter Baughman, Steven! Ragnar


Dependent Packages

Name Repo Deps
async_web_server_cpp github-fkie-async_web_server_cpp
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
rosbridge_server github-RobotWebTools-rosbridge_suite
slam_toolbox github-SteveMacenski-slam_toolbox
joy_teleop github-ros-teleop-teleop_tools
twist_mux github-ros-teleop-twist_mux
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
dolly_tests github-chapulina-dolly
kobuki_velocity_smoother github-kobuki-base-kobuki_velocity_smoother
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
nav2_amcl github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_core github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
rcl github-ros2-rcl
rcl_logging_log4cxx github-ros2-rcl_logging
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
joint_limits_interface github-ros-controls-ros2_control
plansys2_bringup github-IntelligentRoboticsLabs-ros2_planning_system
ros2param github-ros2-ros2cli
ros_ign_bridge github-ignitionrobotics-ros_ign
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing
ros2bag github-ros2-rosbag2
test_cli_remapping github-ros2-system_tests
test_communication github-ros2-system_tests
test_rclcpp github-ros2-system_tests
test_security github-ros2-system_tests
ros_core github-ros2-variants
webots_ros2_tests github-cyberbotics-webots_ros2
domain_bridge github-ros2-domain_bridge
ros2launch_security_examples github-osrf-ros2launch_security


Package Summary

Tags No category tags.
Version 0.10.10
License Apache License 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/ros2/launch.git
VCS Type git
VCS Version foxy
Last Updated 2022-10-12
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

A package to create tests which involve launch files and multiple processes.

Additional Links

No additional links.

Maintainers

  • Pete Baughman

Authors

  • Pete Baughman
  • Dirk Thomas
  • Esteve Fernandez

launch_testing

This tool is a framework for launch integration testing. For example:

  • The exit codes of all processes are available to the tests.
  • Tests can check that all processes shut down normally, or with specific exit codes.
  • Tests can fail when a process dies unexpectedly.
  • The stdout and stderr of all processes are available to the tests.
  • The command line used to launch each process is available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Quick start example

Start with the launch_testing example good_proc_launch_test.py.

Run the example by doing:

launch_test test/launch_testing/examples/good_proc_launch_test.py

launch_test will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestProcessOutput class.

The Launch Description

def generate_test_description():

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example.
        # In a more complicated launch description, we might want this action to happen
        # once some process starts or once some other event happens.
        launch_testing.actions.ReadyToTest()
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested.

The launch description needs to include a ReadyToTest action to signal to the test framework that it's safe to start the active tests.

In the above example, there is no need to delay the start of the tests so the ReadyToTest action is a peer to the process under test and will signal to the framework that it's safe to start around the same time the ExecuteProcess action is run.

In older style tests, a function called ready_fn is declared as an argument to generate_test_description and must be plumbed into the launch description with an OpaqueFunction. This method has been fully replaced by the ReadyToTest action and is therefore deprecated.

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the process under test. These tests are expected to interact with the running processes in some way.

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side-effect of running the processes.

Exit Codes and Standard Out

The launch_testing framework automatically adds some member fields to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or to wait for specific output.

Assertions

The launch_testing framework automatically records all stdout from the launched processes as well as the exit codes from any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used by one of several assert methods to check the output or exit codes of the processes:

launch_testing.asserts.assertInStdout(proc_output, msg, process, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message is found in the stdout of a particular process.

  • msg:

    The text to look for in the process standard out

  • process:

    Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes.

  • cmd_args:

    When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name.

  • strict_proc_matching:

    When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting.

launch_testing.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], process=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes:

    A list of allowable exit codes. By default EXIT_OK (0) plus EXIT_FORCED (1) on Windows. Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139).

  • The process, cmd_args, and strict_proc_matching arguments behave the same way as in assertInStdout. By default, the exit codes of all processes are checked.

launch_testing.asserts.assertSequentialStdout(proc_output, process, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • process and cmd_args:

    These arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes, because there is no way to determine the order of stdout that came from multiple processes.

Returns a context manager that will check that a series of assertions happen in order.

As an example, consider:

with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

The active tests can also call methods that wait for particular output or for a particular process to exit, with a timeout. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, process=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • msg, process, cmd_args, and strict_proc_matching:

    These arguments work the same as in other assert methods. By default, this method waits on output from any process.

  • timeout:

    The amount of time to wait before raising an AssertionError.

proc_info.assertWaitForShutdown(process, cmd_args=None, *, timeout=10)

  • process and cmd_args:

    These arguments work the same as in other assertions, but it is not possible to wait on multiple processes to shut down.

  • timeout:

    The amount of time to wait before raising an AssertionError.

Arguments

launch_test uses launch arguments for tests too.

Arguments are declared in the launch description and can be accessed by the test via a test_args dictionary that's injected into the tests similar to proc_info and proc_output.

launch_test --show-args test/launch_testing/examples/args_launch_test.py
launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=value

See the launch_testing example with arguments for further reference.

Using CMake

To run launch tests from a CMakeLists.txt file, you'll need to declare a dependency on launch_testing_ament_cmake in your package.xml.

Then, in the CMakeLists.txt file, add:

find_package(launch_testing_ament_cmake)
add_launch_test(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_launch_test(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

Examples

good_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/good_proc_launch_test.py

This test checks a process called good_proc (source found in the example_processes folder). good_proc is a simple Python process that prints "Loop 1", "Loop 2", etc. every second until it's terminated with ctrl+c. The test will launch the process, wait for a few loops to complete by monitoring stdout, then terminate the process and run some post-shutdown checks.

The pre-shutdown tests check that "Loop 1, Loop 2, Loop 3, Loop 4" are all printed to stdout. Once this test finishes, the process under test is shut down.

After shutdown, we run a similar test that checks more output, and also checks the order of the output. test_out_of_order demonstrates that the assertSequentialStdout context manager is able to detect out of order stdout.

terminating_proc_launch_test.py

Usage:

launch_test test/launch_testing/examples/terminating_proc_launch_test.py

This test checks proper functionality of the terminating_proc example (source found in the example_processes folder).

args_launch_test.py

Usage to view the arguments:

launch_test test/launch_testing/examples/args_launch_test.py --show-args

Usage to run the test:

launch_test test/launch_testing/examples/args_launch_test.py dut_arg:=hey

This example shows how to pass arguments into a launch test. The arguments are available in the launch description via a launch.substitutions.LaunchConfiguration, and to the test cases via a self.test_args dictionary.

This example will fail if no arguments are passed.

context_launch_test.py

Usage:

launch_test test/launch_testing/examples/context_launch_test.py

This example shows how the generate_test_description function can return a tuple where the second item is a dictionary of objects that will be injected into the individual test cases. Tests that wish to use elements of the test context can add arguments with names matching the keys of the dictionary.

CHANGELOG

Changelog for package launch_testing

0.10.10 (2022-10-12)

0.10.9 (2022-09-12)

  • Fix Typo (#641) (#642)
  • Add compatibility with pytest 7 (#592) (#628)
  • Mention that ready_fn in generate_test_description is deprecated (#623) (#624)
  • Switch to using a comprehension for process_names (#614) (#617)
  • Contributors: Bi0T1N, Chris Lalancette, Kenji Brameld, Shane Loretz

0.10.8 (2022-01-12)

  • Fix launch_testing README.md: Changed 'proc' keyword to 'process'. (#554)
  • Contributors: Aditya, Michael McConnell

0.10.7 (2021-11-12)

0.10.6 (2021-08-31)

0.10.5 (2021-04-14)

  • Fix max() with empty sequence (#440) (#491)
  • Contributors: Dirk Thomas, Michel Hidalgo

0.10.4 (2020-12-08)

0.10.3 (2020-08-27)

0.10.2 (2020-05-26)

  • Set junit_family to xunit2 in pytest.ini
  • Stop using implicit variables in example testing.
  • Switch to from_parent to remove deprecation warning.
  • Fix new flake8 errors. (#420)
  • Remove uses of deprecated ready_fn. (#419)
  • Contributors: Chris Lalancette, Michel Hidalgo

0.10.1 (2020-05-08)

  • fixed deprecation warning of imp to importlib (issue #387) (#407)
  • wait_for_output() repr includes actual text (#408)
  • Contributors: Shane Loretz, Zahi Kakish

0.10.0 (2020-04-24)

  • Improve jUnit output for launch tests when run with py.test (#404)
  • avoid deprecation warning, use from_parent (#402)
  • Warn that old-style ready_fn and test attributes will be deprecated (#346)
  • more verbose test_flake8 error messages (same as ros2/launch_ros#135)
  • stop using constructors deprecated in pytest 5.4 (#391)
  • Add the ability to assert in stdout or stderr. (#378)
  • Add delay parameter to retry_on_failure decorator (#390)
  • Make RegisterEventHandler describe its sub-entities (#386)
  • Import test file without contaminating sys.modules (#360)
  • Update reference to example launch test file (#363)
  • Use imperative mood in docstrings. (#362)
  • Fix a documentation typo. (#361)
  • Fix junit XML when launch dies early (#358)
  • Contributors: Chris Lalancette, Dan Rose, Dirk Thomas, Jacob Perron, Michel Hidalgo, Peter Baughman, Steven! Ragnar

Wiki Tutorials

See ROS Wiki Tutorials for more details.

Source Tutorials

Not currently indexed.

Dependent Packages

Name Repo Deps
async_web_server_cpp github-fkie-async_web_server_cpp
diagnostic_updater github-ros-diagnostics
fmi_adapter github-boschresearch-fmi_adapter
stereo_image_proc github-ros-perception-image_pipeline
joint_state_publisher github-ros-joint_state_publisher
swri_transform_util github-swri-robotics-marti_common
novatel_oem7_driver github-novatel-novatel_oem7_driver
psen_scan_v2 github-PilzDE-psen_scan_v2
rosauth github-GT-RAIL-rosauth
rosbridge_server github-RobotWebTools-rosbridge_suite
slam_toolbox github-SteveMacenski-slam_toolbox
joy_teleop github-ros-teleop-teleop_tools
twist_mux github-ros-teleop-twist_mux
composition github-ros2-demos
demo_nodes_cpp github-ros2-demos
demo_nodes_cpp_native github-ros2-demos
image_tools github-ros2-demos
intra_process_demo github-ros2-demos
logging_demo github-ros2-demos
pendulum_control github-ros2-demos
quality_of_service_demo_cpp github-ros2-demos
kobuki_velocity_smoother github-kobuki-base-kobuki_velocity_smoother
launch_testing_ament_cmake github-ros2-launch
test_launch_testing github-ros2-launch
launch_testing_ros github-ros2-launch_ros
nav2_amcl github-ros-planning-navigation2
nav2_bringup github-ros-planning-navigation2
nav2_core github-ros-planning-navigation2
nav2_costmap_2d github-ros-planning-navigation2
nav2_map_server github-ros-planning-navigation2
nav2_system_tests github-ros-planning-navigation2
openzen_driver bitbucket-lpresearch-openzenros2
rcl github-ros2-rcl
rcl_logging_log4cxx github-ros2-rcl_logging
rcl_logging_noop github-ros2-rcl_logging
rclc github-ros2-rclc
rclcpp_components github-ros2-rclcpp
rcutils github-ros2-rcutils
ros1_bridge github-ros2-ros1_bridge
joint_limits_interface github-ros-controls-ros2_control
plansys2_bringup github-IntelligentRoboticsLabs-ros2_planning_system
ros2param github-ros2-ros2cli
ros_ign_bridge github-ignitionrobotics-ros_ign
ros2test github-ros2-ros_testing
ros_testing github-ros2-ros_testing
ros2bag github-ros2-rosbag2
test_cli_remapping github-ros2-system_tests
test_communication github-ros2-system_tests
test_rclcpp github-ros2-system_tests
test_security github-ros2-system_tests
ros_core github-ros2-variants
webots_ros2_tests github-cyberbotics-webots_ros2
xacro_live github-orise-robotics-xacro_live

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
