
Package Summary

Tags No category tags.
Version 2.3.0
License Apache 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://gitlab.com/ApexAI/performance_test.git
VCS Type git
VCS Version master
Last Updated 2024-09-24
Dev Status MAINTAINED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Apex.AI performance_test runner, plotter, and reporter

Additional Links

No additional links.

Maintainers

  • Apex AI, Inc.

Authors

No additional authors.

performance_report

[TOC]

This package serves two purposes:

  1. Run multiple performance_test experiments
  2. Visualize the combined results of those experiments

Quick start

Install the required dependencies:

python3 -m pip install -r third_party/python/requirements.txt
sudo apt-get install firefox-geckodriver

Note: all the commands below are run from the colcon workspace where performance_test/performance_report is installed:

# Build performance_test and performance_report
colcon build

# Set up the environment
source install/setup.bash

# Run perf_test for each experiment in the yaml file
ros2 run performance_report runner \
  --log-dir perf_logs \
  --test-name experiments \
  --configs src/performance_test/performance_report/cfg/runner/run_one_experiment.yaml

# The runner generates log files in the specified directory: `./perf_logs/experiments/`

# Generate the plots configured in the specified yaml file
ros2 run performance_report plotter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/plotter/plot_one_experiment.yaml

# The generated plots will be saved in `./perf_logs`

# Generate the reports configured in the specified yaml file
ros2 run performance_report reporter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/reporter/report_one_experiment.yaml

runner

The performance_report runner tool is a wrapper around performance_test perf_test. It executes one or more perf_test experiments defined in a yaml file:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription  # or rclcpp-single-threaded-executor for ROS 2
    msg: Array1k
    rate: 20
  -
    com_mean: ApexOSPollingSubscription
    msg: Array4k
    rate: 20

To run all experiments in the config file, only a single command is required:

ros2 run performance_report runner \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

runner will invoke perf_test for each experiment, in sequence. The results for each experiment are stored in a JSON log file in the directory output/path/to/log/files/custom_name_for_this_set_of_tests/.

For a list of all experiment configuration options, and their default values, see any of the example yaml configuration files in cfg/runner.

runner will by default skip any experiments that already have log files generated in the output directory. This can be overridden by adding -f or --force to the command.
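This skip-unless-forced behavior can be sketched as follows (a minimal illustration only; the real runner derives log file names from the experiment parameters, which is simplified here to a hypothetical `experiment_id`):

```python
from pathlib import Path

def should_run(log_dir: Path, test_name: str, experiment_id: str, force: bool = False) -> bool:
    """Return True if an experiment should be executed.

    Mirrors the documented behavior: skip an experiment whose log file
    already exists in <log_dir>/<test_name>/, unless --force is given.
    """
    log_file = log_dir / test_name / f"{experiment_id}.json"
    return force or not log_file.exists()
```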

Reducing duplication in configuration files

Each experiment value can be either a single value or an array:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription
    msg:
      - Array1k
      - Array4k
      - Array16k
    pubs: 1
    subs: 1
    rate:
      - 20
      - 500
    reliability:
      - RELIABLE
      - BEST_EFFORT
    durability:
      - VOLATILE
      - TRANSIENT_LOCAL
    history: KEEP_LAST
    history_depth: 16

For this configuration file, runner would run all combinations, for a total of 24 experiments.
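The expansion is a Cartesian product over every key whose value is a list. A minimal sketch of that expansion (the actual runner's option handling is more involved):

```python
from itertools import product

def expand_experiments(config: dict) -> list[dict]:
    """Expand list-valued fields into the cross product of concrete experiments."""
    keys = list(config)
    # Treat scalars as single-element lists so product() handles both cases.
    value_lists = [v if isinstance(v, list) else [v] for v in config.values()]
    return [dict(zip(keys, combo)) for combo in product(*value_lists)]

experiment = {
    "com_mean": "ApexOSPollingSubscription",
    "msg": ["Array1k", "Array4k", "Array16k"],
    "pubs": 1,
    "subs": 1,
    "rate": [20, 500],
    "reliability": ["RELIABLE", "BEST_EFFORT"],
    "durability": ["VOLATILE", "TRANSIENT_LOCAL"],
    "history": "KEEP_LAST",
    "history_depth": 16,
}
print(len(expand_experiments(experiment)))  # 3 * 2 * 2 * 2 = 24
```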

YAML aliases and anchors are also a great way to reduce duplication:

---
comparison_experiments_common: &comparison_experiments_common
  com_mean: ApexOSPollingSubscription
  msg:
    - Array1k
    - Array4k
    - Array16k
    - Array64k
    - Array256k
    - Array1m
    - Array4m
  rate: 20

inter_thread_copy: &inter_thread_copy
  process_configuration: INTRA_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_copy: &inter_process_copy
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_loaned: &inter_process_loaned
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: LOANED_SAMPLES

experiments:
  -
    <<: *comparison_experiments_common
    <<: *inter_thread_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_loaned
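The `<<` merge key behaves like a dictionary update: the anchored mapping's entries are copied into the experiment, and any key written directly in the experiment would win over a merged one. The effect can be illustrated in plain Python (values are abbreviated from the anchors above):

```python
comparison_experiments_common = {
    "com_mean": "ApexOSPollingSubscription",
    "msg": ["Array1k", "Array4k"],
    "rate": 20,
}

inter_process_loaned = {
    "process_configuration": "INTER_PROCESS",
    "execution_strategy": "INTER_THREAD",
    "sample_transport": "LOANED_SAMPLES",
}

# Merging two anchors is equivalent to combining the dictionaries.
experiment = {**comparison_experiments_common, **inter_process_loaned}
print(experiment["sample_transport"])  # LOANED_SAMPLES
```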

commander

commander generates the perf_test commands that would be invoked by runner, but does not actually run them:

ros2 run performance_report commander \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

The result (written to stdout) is a set of commands for invoking perf_test directly, for all of the experiments in the configuration file. The output can be inspected manually, or invoked:

ros2 run performance_report commander ...args... > perf_test_commands.sh
chmod +x perf_test_commands.sh
./perf_test_commands.sh

After invoking the generated script, the result is the same as if runner were used originally.

plotter

After experiments are complete, plotter can generate static images of plots from the resulting data:

ros2 run performance_report plotter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The plotter configuration files are easiest to explain through example. Example yaml configuration files can be found in cfg/plotter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above.

reporter

While plotter can generate static images, reporter uses Jinja templates to create a Markdown or HTML report containing interactive bokeh plots:

ros2 run performance_report reporter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The reporter configuration files are very similar to those for plotter, and also are easiest to explain through example. Example yaml configuration files can be found in cfg/reporter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above. Also see the example .md and .html template files, from which the output reports are generated.

Running the same experiments on multiple platforms

Suppose you want to run an experiment on multiple platforms, then combine the results into a single report. First, pass the --test-name arg to runner, to differentiate the result sets:

# on platform 1:
ros2 run performance_report runner --test-name platform1 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform1/

# on platform 2:
ros2 run performance_report runner --test-name platform2 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform2/

You can then combine these results into a single log_dir on the platform where you will run plotter or reporter. Then, in your plotter or reporter configuration file, set test_name in each dataset to select results from that platform’s result set:

# report.yaml
datasets:
  dataset_p1:
    test_name: platform1  # this matches the --test-name passed to runner
    # other fields...
  dataset_p2:
    test_name: platform2  # this matches the --test-name passed to runner
    # other fields...
reports:
  # ...

ros2 run performance_report reporter -l log_dir -c report.yaml
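Because runner writes each result set under `<log_dir>/<test_name>/`, a dataset's `test_name` effectively selects one subdirectory of the combined log directory. A rough sketch of that lookup, assuming the layout described above (file naming simplified):

```python
from pathlib import Path

def dataset_log_files(log_dir: Path, test_name: str) -> list[Path]:
    """List the JSON result logs belonging to one platform's result set."""
    return sorted((log_dir / test_name).glob("*.json"))
```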

Notes

  • Currently, this tool is intended for ROS 2 with rmw_cyclonedds_cpp, or Apex.OS with Apex.Middleware. It has not been tested with any other middleware.
  • If the run configuration includes SHMEM or ZERO_COPY transport, then a file for configuring the middleware will be created to enable the shared memory transfer.
    • You must start RouDi before running the experiments. This tool will not automatically start it for you.

CHANGELOG

Changelog for package performance_report

X.Y.Z (YYYY/MM/DD)

2.3.0 (2024/09/24)

2.2.0 (2024/05/15)

Changed

  • Plugins are now responsible for enabling shared memory transfer, so runner and commander will no longer set the related runtime flags (e.g. CYCLONEDDS_URI)

Fixed

  • For categorical plots, coerce the x_range to a string

2.1.0 (2024/04/17)

2.0.0 (2024/03/19)

Removed

  • Removed the special handling for the BoundedSequenceFlat messages, because the messages are removed in performance_test

1.5.2 (2024/01/24)

Fixed

  • Elegantly handle a failure to parse JSON log files

1.5.0 (2023/06/14)

Added

  • The reporter box-and-whisker latency plots now support latency_mean_ms for the y-axis, in addition to the previously-supported latency_mean
  • Added a new option prevent_cpu_idle (bool) for experiment configurations, which corresponds to the --prevent-cpu-idle switch in perf_test

Changed

  • Update the README to better explain the purpose and usage of runner, commander, plotter, and reporter

1.4.2 (2023/03/15)

1.4.1 (2023/02/23)

1.4.0 (2023/02/20)

Added

  • Figures have a new x_range option: ru_maxrss_mb

Changed

  • BoundedSequenceFlatXYZ will be mapped to BoundedSequenceXYZ for categorical plots, so that both message types can be compared directly on a single plot

1.3.7 (2023/01/04)

Added

  • The reporter templates can access os environment variables:
    • {{ env['SOME_ENVIRONMENT_VARIABLE'] }}
  • For error detection, the exit code for performance_report reporter is the number of missing datasets

1.3.6 (2023/01/03)

1.3.5 (2022/12/05)

1.3.4 (2022/11/28)

1.3.3 (2022/11/28)

Fixed

  • Do not try to create a box-and-whisker for a file that contains no measurements

1.3.2 (2022/11/21)

1.3.1 (2022/11/21)

1.3.0 (2022/08/25)

Added

  • The reporter configuration supports box-and-whisker latency plots:
    • set the x_range to Experiment
    • set the y_range to latency_mean
    • set datasets to one or more datasets, each containing a single experiment
    • an example can be found in cfg/reporter/report_many_experiments.yaml

Changed

  • Expanded the transport setting into the following two settings:
    • process_configuration:
      • INTRA_PROCESS
      • INTER_PROCESS
    • sample_transport:
      • BY_COPY
      • SHARED_MEMORY
      • LOANED_SAMPLES

1.2.1 (2022/06/30)

1.2.0 (2022/06/28)

Changed

  • In the reporter configuration, the template_name value may be an array

1.1.2 (2022/06/08)

1.1.1 (2022/06/07)

Fixed

  • Bokeh line style can be specified in the plotter and reporter .yaml files

1.1.0 (2022/06/02)

Fixed

  • Fix the GBP builds by removing python3-bokeh-pip from package.xml

1.0.0 (2022/05/12)

Added

  • Shared memory experiments are now compatible with both Apex.Middleware and rmw_cyclonedds_cpp
  • commander tool to emit the commands for running the experiments, instead of running them directly

Changed

  • Use the new perf_test CLI args for QOS settings instead of old flags


Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Dependent Packages

No known dependants.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged performance_report at Robotics Stack Exchange

Package Summary

Tags No category tags.
Version 2.3.0
License Apache 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://gitlab.com/ApexAI/performance_test.git
VCS Type git
VCS Version master
Last Updated 2024-09-24
Dev Status MAINTAINED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Apex.AI performance_test runner, plotter, and reporter

Additional Links

No additional links.

Maintainers

  • Apex AI, Inc.

Authors

No additional authors.

performance_report

[TOC]

This package serves two purposes:

  1. Run multiple performance_test experiments
  2. Visualize the combined results of those experiments

Quick start

Install the required dependencies:

python3 -m pip install -r third_party/python/requirements.txt
sudo apt-get install firefox-geckodriver

Note: all the commands below are run from the colcon workspace where performance_test/performance_report is installed:

# Build performance_test and performance_report
colcon build

# Set up the environment
source install/setup.bash

# Run perf_test for each experiment in the yaml file
ros2 run performance_report runner \
  --log-dir perf_logs \
  --test-name experiments \
  --configs src/performance_test/performance_report/cfg/runner/run_one_experiment.yaml

# The runner generates log files to the specified directory: `./perf_logs/experiements/`

# Generate the plots configured in the specified yaml file
ros2 run performance_report plotter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/plotter/plot_one_experiment.yaml

# The generated plots will be saved in `./perf_logs`

# Generate the reports configured in the specified yaml file
ros2 run performance_report reporter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/reporter/report_one_experiment.yaml

runner

The performance_report runner tool is a wrapper around performance_test perf_test. It executes one or more perf_test experiments defined in a yaml file:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription  # or rclcpp-single-threaded-executor for ROS 2
    msg: Array1k
    rate: 20
  -
    com_mean: ApexOSPollingSubscription
    msg: Array4k
    rate: 20

To run all experiments in the config file, only a single command is required:

ros2 run performance_report runner \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

runner will invoke perf_test for each experiment, in sequence. The results for each experiment will be stored in a json log file in the directory output/path/to/log/files/custom_name_for_this_set_of_tests/.

For a list of all experiment configuration options, and their default values, see any of the example yaml configuration files in cfg/runner.

runner will by default skip any experiments that already have log files generated in the output directory. This can be overridden by adding -f or --force to the command.

Reducing duplication in configuration files

All of the experiment values can be a single value or an array:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription
    msg:
      - Array1k
      - Array4k
      - Array16k
    pubs: 1
    subs: 1
    rate:
      - 20
      - 500
    reliability:
      - RELIABLE
      - BEST_EFFORT
    durability:
      - VOLATILE
      - TRANSIENT_LOCAL
    history: KEEP_LAST
    history_depth: 16

For this configuration file, runner would run all combinations, for a total of 24 experiments.

YAML aliases and anchors are also a great way to reduce duplication:

---
comparison_experiments_common: &comparison_experiments_common
  com_mean: ApexOSPollingSubscription
  msg:
    - Array1k
    - Array4k
    - Array16k
    - Array64k
    - Array256k
    - Array1m
    - Array4m
  rate: 20

inter_thread_copy: &inter_thread_copy
  process_configuration: INTRA_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_copy: &inter_process_copy
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_loaned: &inter_process_loaned
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: LOANED_SAMPLES

experiments:
  -
    <<: *comparison_experiments_common
    <<: *inter_thread_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_loaned

commander

commander generates the perf_test commands that would be invoked by runner, but does not actually run them:

ros2 run performance_report commander \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

The result (written to stdout) is a set of commands for invoking perf_test directly, for all of the experiments in the configuration file. The output can be inspected manually, or invoked:

ros2 run performance_report commander ...args... > perf_test_commands.sh
chmod +x perf_test_commands.sh
./perf_test_commands.sh

After invoking the generated script, the result is the same as if runner were used originally.

plotter

After experiments are complete, plotter can generate static images of plots from the resulting data:

ros2 run performance_report plotter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The plotter configuration files are easiest to explain through example. Example yaml configuration files can be found in cfg/plotter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above.

reporter

While plotter can generate static images, reporter uses Jinja templates to create a markdown or html report containing interactible bokeh plots:

ros2 run performance_report reporter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The reporter configuration files are very similar to those for plotter, and also are easiest to explain through example. Example yaml configuration files can be found in cfg/reporter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above. Also see the example .md and .html template files, from which the output reports are generated.

Running the same experiments on multiple platforms

Suppose you want to run an experiment on multiple platforms, then combine the results into a single report. First, pass the --test-name arg to runner, to differentiate the result sets:

# on platform 1:
ros2 run performance_report runner --test_name platform1 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform1/

# on platform 2:
ros2 run performance_report runner --test_name platform2 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform2/

You can then combine these results into a single log_dir, on the platform where you will run plotter or reporter. Then, in your plotter or reporter configuration file, set test_name in each dataset, to select results from that platform’s result set:

# report.yaml
datasets:
  dataset_p1:
    test_name: platform1  # this matches the --test-name passed to runner
    # other fields...
  dataset_p2:
    test_name: platform2  # this matches the --test-name passed to runner
    # other fields...
reports:
  # ...

ros2 run performance_report reporter -l log_dir -c report.yaml

Notes

  • Currently, this tool is intended for ROS 2 with rmw_cyclone_dds, or Apex.OS with Apex.Middleware. It has not been tested with any other transport.
  • If the run configuration includes SHMEM or ZERO_COPY transport, then a file for configuring the middleware will be created to enable the shared memory transfer.
    • You must start RouDi before running the experiments. This tool will not automatically start it for you.
CHANGELOG

Changelog for package performance_report

X.Y.Z (YYYY/MM/DD)

2.3.0 (2024/09/24)

2.2.0 (2024/05/15)

Changed

  • Plugins are now responsible for enabling shared memory transfer, so runner and commander will no longer set the related runtime flags (e.g. CYCLONEDDS_URI)

    Fixed

  • For categorical plots, coerce the x_range to a string

2.1.0 (2024/04/17)

2.0.0 (2024/03/19)

Removed

  • Removed the special handling for the BoundedSequenceFlat messages, because the messages are removed in performance_test

1.5.2 (2024/01/24)

Fixed

  • Elegantly handle a failure to parse JSON log files

1.5.0 (2023/06/14)

Added

  • The reporter box-and-whisker latency plots now support latency_mean_ms for the y-axis, in addition to the previously-supported latency_mean
  • Added a new option prevent_cpu_idle (bool) for experiment configurations, which corresponds to the --prevent-cpu-idle switch in perf_test

    Changed

  • Update the README to better explain the purpose and usage of runner, commander, plotter, and reporter

1.4.2 (2023/03/15)

1.4.1 (2023/02/23)

1.4.0 (2023/02/20)

Added

  • Figures have a new x_range option: ru_maxrss_mb

    Changed

  • BoundedSequenceFlatXYZ will be mapped to BoundedSequenceXYZ for categorical plots, so that both message types can be compared directly on a single plot

1.3.7 (2023/01/04)

Added

  • The reporter templates can access os environment variables:
    • {{ env['SOME_ENVIRONMENT_VARIABLE'] }}
  • For error detection, the exit code for performance_report reporter is the number of missing datasets

Fixed

1.3.6 (2023/01/03)

1.3.5 (2022/12/05)

1.3.4 (2022/11/28)

1.3.3 (2022/11/28)

Fixed

  • Do not try to create a box-and-whisker for a file that contains no measurements

1.3.2 (2022/11/21)

1.3.1 (2022/11/21)

1.3.0 (2022/08/25)

Added

  • The reporter configuration supports box-and-whisker latency plots:
    • set the x_range to Experiment
    • set the y_range to latency_mean
    • set datasets to one or more datasets, each containing a single experiment
    • an example can be found in cfg/reporter/report_many_experiments.yaml

      Changed

  • Expanded the transport setting into the following two settings:
    • process_configuration:
      • INTRA_PROCESS
      • INTER_PROCESS
    • sample_transport:
      • BY_COPY
      • SHARED_MEMORY
      • LOANED_SAMPLES

1.2.1 (2022/06/30)

1.2.0 (2022/06/28)

Changed

  • In the reporter configuration, the template_name value may be an array

1.1.2 (2022/06/08)

1.1.1 (2022/06/07)

Fixed

  • Bokeh line style can be specified in the plotter and reporter .yaml files

1.1.0 (2022/06/02)

Fixed

  • Fix the GBP builds by removing python3-bokeh-pip from package.xml

1.0.0 (2022/05/12)

Added

  • Shared memory experiments are now compatible with both Apex.Middleware and rmw_cyclonedds_cpp
  • commander tool to emit the commands for running the experiments, instead of running them directly

    Changed

  • Use the new perf_test CLI args for QOS settings instead of old flags

    Deprecated

    Removed

    Fixed

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Dependant Packages

No known dependants.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged performance_report at Robotics Stack Exchange

Package Summary

Tags No category tags.
Version 2.3.0
License Apache 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://gitlab.com/ApexAI/performance_test.git
VCS Type git
VCS Version master
Last Updated 2024-09-24
Dev Status MAINTAINED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Apex.AI performance_test runner, plotter, and reporter

Additional Links

No additional links.

Maintainers

  • Apex AI, Inc.

Authors

No additional authors.

performance_report

[TOC]

This package serves two purposes:

  1. Run multiple performance_test experiments
  2. Visualize the combined results of those experiments

Quick start

Install the required dependencies:

python3 -m pip install -r third_party/python/requirements.txt
sudo apt-get install firefox-geckodriver

Note: all the commands below are run from the colcon workspace where performance_test/performance_report is installed:

# Build performance_test and performance_report
colcon build

# Set up the environment
source install/setup.bash

# Run perf_test for each experiment in the yaml file
ros2 run performance_report runner \
  --log-dir perf_logs \
  --test-name experiments \
  --configs src/performance_test/performance_report/cfg/runner/run_one_experiment.yaml

# The runner generates log files to the specified directory: `./perf_logs/experiements/`

# Generate the plots configured in the specified yaml file
ros2 run performance_report plotter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/plotter/plot_one_experiment.yaml

# The generated plots will be saved in `./perf_logs`

# Generate the reports configured in the specified yaml file
ros2 run performance_report reporter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/reporter/report_one_experiment.yaml

runner

The performance_report runner tool is a wrapper around performance_test perf_test. It executes one or more perf_test experiments defined in a yaml file:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription  # or rclcpp-single-threaded-executor for ROS 2
    msg: Array1k
    rate: 20
  -
    com_mean: ApexOSPollingSubscription
    msg: Array4k
    rate: 20

To run all experiments in the config file, only a single command is required:

ros2 run performance_report runner \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

runner will invoke perf_test for each experiment, in sequence. The results for each experiment will be stored in a json log file in the directory output/path/to/log/files/custom_name_for_this_set_of_tests/.

For a list of all experiment configuration options, and their default values, see any of the example yaml configuration files in cfg/runner.

runner will by default skip any experiments that already have log files generated in the output directory. This can be overridden by adding -f or --force to the command.

Reducing duplication in configuration files

All of the experiment values can be a single value or an array:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription
    msg:
      - Array1k
      - Array4k
      - Array16k
    pubs: 1
    subs: 1
    rate:
      - 20
      - 500
    reliability:
      - RELIABLE
      - BEST_EFFORT
    durability:
      - VOLATILE
      - TRANSIENT_LOCAL
    history: KEEP_LAST
    history_depth: 16

For this configuration file, runner would run all combinations, for a total of 24 experiments.

YAML aliases and anchors are also a great way to reduce duplication:

---
comparison_experiments_common: &comparison_experiments_common
  com_mean: ApexOSPollingSubscription
  msg:
    - Array1k
    - Array4k
    - Array16k
    - Array64k
    - Array256k
    - Array1m
    - Array4m
  rate: 20

inter_thread_copy: &inter_thread_copy
  process_configuration: INTRA_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_copy: &inter_process_copy
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_loaned: &inter_process_loaned
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: LOANED_SAMPLES

experiments:
  -
    <<: *comparison_experiments_common
    <<: *inter_thread_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_loaned

commander

commander generates the perf_test commands that would be invoked by runner, but does not actually run them:

ros2 run performance_report commander \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

The result (written to stdout) is a set of commands for invoking perf_test directly, for all of the experiments in the configuration file. The output can be inspected manually, or invoked:

ros2 run performance_report commander ...args... > perf_test_commands.sh
chmod +x perf_test_commands.sh
./perf_test_commands.sh

After invoking the generated script, the result is the same as if runner were used originally.

plotter

After experiments are complete, plotter can generate static images of plots from the resulting data:

ros2 run performance_report plotter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The plotter configuration files are easiest to explain through example. Example yaml configuration files can be found in cfg/plotter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above.

reporter

While plotter can generate static images, reporter uses Jinja templates to create a markdown or html report containing interactible bokeh plots:

ros2 run performance_report reporter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The reporter configuration files are very similar to those for plotter, and also are easiest to explain through example. Example yaml configuration files can be found in cfg/reporter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above. Also see the example .md and .html template files, from which the output reports are generated.

Running the same experiments on multiple platforms

Suppose you want to run an experiment on multiple platforms, then combine the results into a single report. First, pass the --test-name arg to runner, to differentiate the result sets:

# on platform 1:
ros2 run performance_report runner --test_name platform1 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform1/

# on platform 2:
ros2 run performance_report runner --test_name platform2 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform2/

You can then combine these results into a single log_dir, on the platform where you will run plotter or reporter. Then, in your plotter or reporter configuration file, set test_name in each dataset, to select results from that platform’s result set:

# report.yaml
datasets:
  dataset_p1:
    test_name: platform1  # this matches the --test-name passed to runner
    # other fields...
  dataset_p2:
    test_name: platform2  # this matches the --test-name passed to runner
    # other fields...
reports:
  # ...

ros2 run performance_report reporter -l log_dir -c report.yaml

Notes

  • Currently, this tool is intended for ROS 2 with rmw_cyclone_dds, or Apex.OS with Apex.Middleware. It has not been tested with any other transport.
  • If the run configuration includes SHMEM or ZERO_COPY transport, then a file for configuring the middleware will be created to enable the shared memory transfer.
    • You must start RouDi before running the experiments. This tool will not automatically start it for you.
CHANGELOG

Changelog for package performance_report

X.Y.Z (YYYY/MM/DD)

2.3.0 (2024/09/24)

2.2.0 (2024/05/15)

Changed

  • Plugins are now responsible for enabling shared memory transfer, so runner and commander will no longer set the related runtime flags (e.g. CYCLONEDDS_URI)

    Fixed

  • For categorical plots, coerce the x_range to a string

2.1.0 (2024/04/17)

2.0.0 (2024/03/19)

Removed

  • Removed the special handling for the BoundedSequenceFlat messages, because the messages are removed in performance_test

1.5.2 (2024/01/24)

Fixed

  • Elegantly handle a failure to parse JSON log files

1.5.0 (2023/06/14)

Added

  • The reporter box-and-whisker latency plots now support latency_mean_ms for the y-axis, in addition to the previously-supported latency_mean
  • Added a new option prevent_cpu_idle (bool) for experiment configurations, which corresponds to the --prevent-cpu-idle switch in perf_test

    Changed

  • Update the README to better explain the purpose and usage of runner, commander, plotter, and reporter

1.4.2 (2023/03/15)

1.4.1 (2023/02/23)

1.4.0 (2023/02/20)

Added

  • Figures have a new x_range option: ru_maxrss_mb

Changed

  • BoundedSequenceFlatXYZ will be mapped to BoundedSequenceXYZ for categorical plots, so that both message types can be compared directly on a single plot

1.3.7 (2023/01/04)

Added

  • The reporter templates can access os environment variables:
    • {{ env['SOME_ENVIRONMENT_VARIABLE'] }}
  • For error detection, the exit code for performance_report reporter is the number of missing datasets

Fixed

1.3.6 (2023/01/03)

1.3.5 (2022/12/05)

1.3.4 (2022/11/28)

1.3.3 (2022/11/28)

Fixed

  • Do not try to create a box-and-whisker for a file that contains no measurements

1.3.2 (2022/11/21)

1.3.1 (2022/11/21)

1.3.0 (2022/08/25)

Added

  • The reporter configuration supports box-and-whisker latency plots:
    • set the x_range to Experiment
    • set the y_range to latency_mean
    • set datasets to one or more datasets, each containing a single experiment
    • an example can be found in cfg/reporter/report_many_experiments.yaml

Changed

  • Expanded the transport setting into the following two settings:
    • process_configuration:
      • INTRA_PROCESS
      • INTER_PROCESS
    • sample_transport:
      • BY_COPY
      • SHARED_MEMORY
      • LOANED_SAMPLES

1.2.1 (2022/06/30)

1.2.0 (2022/06/28)

Changed

  • In the reporter configuration, the template_name value may be an array

1.1.2 (2022/06/08)

1.1.1 (2022/06/07)

Fixed

  • Bokeh line style can be specified in the plotter and reporter .yaml files

1.1.0 (2022/06/02)

Fixed

  • Fix the GBP builds by removing python3-bokeh-pip from package.xml

1.0.0 (2022/05/12)

Added

  • Shared memory experiments are now compatible with both Apex.Middleware and rmw_cyclonedds_cpp
  • commander tool to emit the commands for running the experiments, instead of running them directly

Changed

  • Use the new perf_test CLI args for QOS settings instead of old flags

Deprecated

Removed

Fixed

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Dependent Packages

No known dependents.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged performance_report at Robotics Stack Exchange


No version is available for the following distros: noetic, ardent, bouncy, crystal, eloquent, dashing.

Package Summary

Tags No category tags.
Version 2.3.0
License Apache 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://gitlab.com/ApexAI/performance_test.git
VCS Type git
VCS Version master
Last Updated 2024-09-24
Dev Status MAINTAINED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Apex.AI performance_test runner, plotter, and reporter

Additional Links

No additional links.

Maintainers

  • Apex AI, Inc.

Authors

No additional authors.

performance_report

[TOC]

This package serves two purposes:

  1. Run multiple performance_test experiments
  2. Visualize the combined results of those experiments

Quick start

Install the required dependencies:

python3 -m pip install -r third_party/python/requirements.txt
sudo apt-get install firefox-geckodriver

Note: all the commands below are run from the colcon workspace where performance_test/performance_report is installed:

# Build performance_test and performance_report
colcon build

# Set up the environment
source install/setup.bash

# Run perf_test for each experiment in the yaml file
ros2 run performance_report runner \
  --log-dir perf_logs \
  --test-name experiments \
  --configs src/performance_test/performance_report/cfg/runner/run_one_experiment.yaml

# The runner generates log files to the specified directory: `./perf_logs/experiements/`

# Generate the plots configured in the specified yaml file
ros2 run performance_report plotter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/plotter/plot_one_experiment.yaml

# The generated plots will be saved in `./perf_logs`

# Generate the reports configured in the specified yaml file
ros2 run performance_report reporter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/reporter/report_one_experiment.yaml

runner

The performance_report runner tool is a wrapper around performance_test perf_test. It executes one or more perf_test experiments defined in a yaml file:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription  # or rclcpp-single-threaded-executor for ROS 2
    msg: Array1k
    rate: 20
  -
    com_mean: ApexOSPollingSubscription
    msg: Array4k
    rate: 20

To run all experiments in the config file, only a single command is required:

ros2 run performance_report runner \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

runner will invoke perf_test for each experiment, in sequence. The results for each experiment will be stored in a json log file in the directory output/path/to/log/files/custom_name_for_this_set_of_tests/.

For a list of all experiment configuration options, and their default values, see any of the example yaml configuration files in cfg/runner.

runner will by default skip any experiments that already have log files generated in the output directory. This can be overridden by adding -f or --force to the command.

Reducing duplication in configuration files

All of the experiment values can be a single value or an array:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription
    msg:
      - Array1k
      - Array4k
      - Array16k
    pubs: 1
    subs: 1
    rate:
      - 20
      - 500
    reliability:
      - RELIABLE
      - BEST_EFFORT
    durability:
      - VOLATILE
      - TRANSIENT_LOCAL
    history: KEEP_LAST
    history_depth: 16

For this configuration file, runner would run all combinations, for a total of 24 experiments.

YAML aliases and anchors are also a great way to reduce duplication:

---
comparison_experiments_common: &comparison_experiments_common
  com_mean: ApexOSPollingSubscription
  msg:
    - Array1k
    - Array4k
    - Array16k
    - Array64k
    - Array256k
    - Array1m
    - Array4m
  rate: 20

inter_thread_copy: &inter_thread_copy
  process_configuration: INTRA_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_copy: &inter_process_copy
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_loaned: &inter_process_loaned
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: LOANED_SAMPLES

experiments:
  -
    <<: *comparison_experiments_common
    <<: *inter_thread_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_loaned

commander

commander generates the perf_test commands that would be invoked by runner, but does not actually run them:

ros2 run performance_report commander \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

The result (written to stdout) is a set of commands for invoking perf_test directly, for all of the experiments in the configuration file. The output can be inspected manually, or invoked:

ros2 run performance_report commander ...args... > perf_test_commands.sh
chmod +x perf_test_commands.sh
./perf_test_commands.sh

After invoking the generated script, the result is the same as if runner were used originally.

plotter

After experiments are complete, plotter can generate static images of plots from the resulting data:

ros2 run performance_report plotter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The plotter configuration files are easiest to explain through example. Example yaml configuration files can be found in cfg/plotter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above.

reporter

While plotter can generate static images, reporter uses Jinja templates to create a markdown or html report containing interactible bokeh plots:

ros2 run performance_report reporter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The reporter configuration files are very similar to those for plotter, and also are easiest to explain through example. Example yaml configuration files can be found in cfg/reporter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above. Also see the example .md and .html template files, from which the output reports are generated.

Running the same experiments on multiple platforms

Suppose you want to run an experiment on multiple platforms, then combine the results into a single report. First, pass the --test-name arg to runner, to differentiate the result sets:

# on platform 1:
ros2 run performance_report runner --test_name platform1 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform1/

# on platform 2:
ros2 run performance_report runner --test_name platform2 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform2/

You can then combine these results into a single log_dir, on the platform where you will run plotter or reporter. Then, in your plotter or reporter configuration file, set test_name in each dataset, to select results from that platform’s result set:

# report.yaml
datasets:
  dataset_p1:
    test_name: platform1  # this matches the --test-name passed to runner
    # other fields...
  dataset_p2:
    test_name: platform2  # this matches the --test-name passed to runner
    # other fields...
reports:
  # ...

ros2 run performance_report reporter -l log_dir -c report.yaml

Notes

  • Currently, this tool is intended for ROS 2 with rmw_cyclone_dds, or Apex.OS with Apex.Middleware. It has not been tested with any other transport.
  • If the run configuration includes SHMEM or ZERO_COPY transport, then a file for configuring the middleware will be created to enable the shared memory transfer.
    • You must start RouDi before running the experiments. This tool will not automatically start it for you.
CHANGELOG

Changelog for package performance_report

X.Y.Z (YYYY/MM/DD)

2.3.0 (2024/09/24)

2.2.0 (2024/05/15)

Changed

  • Plugins are now responsible for enabling shared memory transfer, so runner and commander will no longer set the related runtime flags (e.g. CYCLONEDDS_URI)

    Fixed

  • For categorical plots, coerce the x_range to a string

2.1.0 (2024/04/17)

2.0.0 (2024/03/19)

Removed

  • Removed the special handling for the BoundedSequenceFlat messages, because the messages are removed in performance_test

1.5.2 (2024/01/24)

Fixed

  • Elegantly handle a failure to parse JSON log files

1.5.0 (2023/06/14)

Added

  • The reporter box-and-whisker latency plots now support latency_mean_ms for the y-axis, in addition to the previously-supported latency_mean
  • Added a new option prevent_cpu_idle (bool) for experiment configurations, which corresponds to the --prevent-cpu-idle switch in perf_test

    Changed

  • Update the README to better explain the purpose and usage of runner, commander, plotter, and reporter

1.4.2 (2023/03/15)

1.4.1 (2023/02/23)

1.4.0 (2023/02/20)

Added

  • Figures have a new x_range option: ru_maxrss_mb

    Changed

  • BoundedSequenceFlatXYZ will be mapped to BoundedSequenceXYZ for categorical plots, so that both message types can be compared directly on a single plot

1.3.7 (2023/01/04)

Added

  • The reporter templates can access os environment variables:
    • {{ env['SOME_ENVIRONMENT_VARIABLE'] }}
  • For error detection, the exit code for performance_report reporter is the number of missing datasets

Fixed

1.3.6 (2023/01/03)

1.3.5 (2022/12/05)

1.3.4 (2022/11/28)

1.3.3 (2022/11/28)

Fixed

  • Do not try to create a box-and-whisker for a file that contains no measurements

1.3.2 (2022/11/21)

1.3.1 (2022/11/21)

1.3.0 (2022/08/25)

Added

  • The reporter configuration supports box-and-whisker latency plots:
    • set the x_range to Experiment
    • set the y_range to latency_mean
    • set datasets to one or more datasets, each containing a single experiment
    • an example can be found in cfg/reporter/report_many_experiments.yaml

      Changed

  • Expanded the transport setting into the following two settings:
    • process_configuration:
      • INTRA_PROCESS
      • INTER_PROCESS
    • sample_transport:
      • BY_COPY
      • SHARED_MEMORY
      • LOANED_SAMPLES

1.2.1 (2022/06/30)

1.2.0 (2022/06/28)

Changed

  • In the reporter configuration, the template_name value may be an array

1.1.2 (2022/06/08)

1.1.1 (2022/06/07)

Fixed

  • Bokeh line style can be specified in the plotter and reporter .yaml files

1.1.0 (2022/06/02)

Fixed

  • Fix the GBP builds by removing python3-bokeh-pip from package.xml

1.0.0 (2022/05/12)

Added

  • Shared memory experiments are now compatible with both Apex.Middleware and rmw_cyclonedds_cpp
  • commander tool to emit the commands for running the experiments, instead of running them directly

    Changed

  • Use the new perf_test CLI args for QOS settings instead of old flags

    Deprecated

    Removed

    Fixed

Wiki Tutorials

This package does not provide any links to tutorials in it's rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Dependant Packages

No known dependants.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged performance_report at Robotics Stack Exchange

Package Summary

Tags No category tags.
Version 2.3.0
License Apache 2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://gitlab.com/ApexAI/performance_test.git
VCS Type git
VCS Version master
Last Updated 2024-09-24
Dev Status MAINTAINED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

Apex.AI performance_test runner, plotter, and reporter

Additional Links

No additional links.

Maintainers

  • Apex AI, Inc.

Authors

No additional authors.

performance_report

[TOC]

This package serves two purposes:

  1. Run multiple performance_test experiments
  2. Visualize the combined results of those experiments

Quick start

Install the required dependencies:

python3 -m pip install -r third_party/python/requirements.txt
sudo apt-get install firefox-geckodriver

Note: all the commands below are run from the colcon workspace where performance_test/performance_report is installed:

# Build performance_test and performance_report
colcon build

# Set up the environment
source install/setup.bash

# Run perf_test for each experiment in the yaml file
ros2 run performance_report runner \
  --log-dir perf_logs \
  --test-name experiments \
  --configs src/performance_test/performance_report/cfg/runner/run_one_experiment.yaml

# The runner generates log files to the specified directory: `./perf_logs/experiements/`

# Generate the plots configured in the specified yaml file
ros2 run performance_report plotter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/plotter/plot_one_experiment.yaml

# The generated plots will be saved in `./perf_logs`

# Generate the reports configured in the specified yaml file
ros2 run performance_report reporter \
  --log-dir perf_logs \
  --configs src/performance_test/performance_report/cfg/reporter/report_one_experiment.yaml

runner

The performance_report runner tool is a wrapper around performance_test perf_test. It executes one or more perf_test experiments defined in a yaml file:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription  # or rclcpp-single-threaded-executor for ROS 2
    msg: Array1k
    rate: 20
  -
    com_mean: ApexOSPollingSubscription
    msg: Array4k
    rate: 20

To run all experiments in the config file, only a single command is required:

ros2 run performance_report runner \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

runner will invoke perf_test for each experiment, in sequence. The results for each experiment will be stored in a json log file in the directory output/path/to/log/files/custom_name_for_this_set_of_tests/.

For a list of all experiment configuration options, and their default values, see any of the example yaml configuration files in cfg/runner.

runner will by default skip any experiments that already have log files generated in the output directory. This can be overridden by adding -f or --force to the command.

Reducing duplication in configuration files

All of the experiment values can be a single value or an array:

---
experiments:
  -
    com_mean: ApexOSPollingSubscription
    msg:
      - Array1k
      - Array4k
      - Array16k
    pubs: 1
    subs: 1
    rate:
      - 20
      - 500
    reliability:
      - RELIABLE
      - BEST_EFFORT
    durability:
      - VOLATILE
      - TRANSIENT_LOCAL
    history: KEEP_LAST
    history_depth: 16

For this configuration file, runner would run all combinations, for a total of 24 experiments.

YAML aliases and anchors are also a great way to reduce duplication:

---
comparison_experiments_common: &comparison_experiments_common
  com_mean: ApexOSPollingSubscription
  msg:
    - Array1k
    - Array4k
    - Array16k
    - Array64k
    - Array256k
    - Array1m
    - Array4m
  rate: 20

inter_thread_copy: &inter_thread_copy
  process_configuration: INTRA_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_copy: &inter_process_copy
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: BY_COPY

inter_process_loaned: &inter_process_loaned
  process_configuration: INTER_PROCESS
  execution_strategy: INTER_THREAD
  sample_transport: LOANED_SAMPLES

experiments:
  -
    <<: *comparison_experiments_common
    <<: *inter_thread_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_copy
  -
    <<: *comparison_experiments_common
    <<: *inter_process_loaned

commander

commander generates the perf_test commands that would be invoked by runner, but does not actually run them:

ros2 run performance_report commander \
  --configs input/path/to/config.yaml \
  --log-dir output/path/to/log/files \
  --test-name custom_name_for_this_set_of_tests

The result (written to stdout) is a set of commands for invoking perf_test directly, for all of the experiments in the configuration file. The output can be inspected manually, or invoked:

ros2 run performance_report commander ...args... > perf_test_commands.sh
chmod +x perf_test_commands.sh
./perf_test_commands.sh

After invoking the generated script, the result is the same as if runner were used originally.

plotter

After experiments are complete, plotter can generate static images of plots from the resulting data:

ros2 run performance_report plotter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The plotter configuration files are easiest to explain through example. Example yaml configuration files can be found in cfg/plotter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above.

reporter

While plotter can generate static images, reporter uses Jinja templates to create a markdown or html report containing interactible bokeh plots:

ros2 run performance_report reporter \
  --configs input/path/to/config.yaml \
  --log-dir input/path/to/log/files

The reporter configuration files are very similar to those for plotter, and are likewise easiest to explain by example. Example yaml configuration files can be found in cfg/reporter. Each is intended to be used with one of the example runner configurations, as shown in the Quick start instructions above. Also see the example .md and .html template files, from which the output reports are generated.
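A reporter template is an ordinary markdown (or html) file with Jinja expressions. The following hypothetical minimal template uses only the documented `env` lookup; any real report would also reference the figures defined in the configuration file:

```markdown
# Performance report

<!-- Hypothetical minimal template: `env` exposes os environment variables -->
Generated on host: {{ env['HOSTNAME'] }}
```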

Running the same experiments on multiple platforms

Suppose you want to run an experiment on multiple platforms, then combine the results into a single report. First, pass the --test-name arg to runner, to differentiate the result sets:

# on platform 1:
ros2 run performance_report runner --test-name platform1 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform1/

# on platform 2:
ros2 run performance_report runner --test-name platform2 -l log_dir -c run.yaml
# results will be stored in ./log_dir/platform2/

You can then combine these results into a single log_dir on the platform where you will run plotter or reporter. Then, in your plotter or reporter configuration file, set test_name in each dataset to select that platform's result set:

# report.yaml
datasets:
  dataset_p1:
    test_name: platform1  # this matches the --test-name passed to runner
    # other fields...
  dataset_p2:
    test_name: platform2  # this matches the --test-name passed to runner
    # other fields...
reports:
  # ...

ros2 run performance_report reporter -l log_dir -c report.yaml
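The "combine into a single log_dir" step is just directory layout: runner stores each platform's results in a subdirectory of the log dir named after its --test-name, so merging result sets means copying those subdirectories into one log dir. A minimal local sketch (file names are illustrative; in practice you would pull the directories from each platform with scp or rsync):

```shell
# Hypothetical: results from each platform have already been fetched to this
# host, e.g. with `scp -r user@platform1:log_dir/platform1 .`
mkdir -p log_dir/platform1 log_dir/platform2

# Illustrative result files, standing in for the real runner output
touch log_dir/platform1/results.json log_dir/platform2/results.json

# plotter/reporter can now select either result set via test_name in the config
ls log_dir
```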

Notes

  • Currently, this tool is intended for ROS 2 with rmw_cyclonedds, or Apex.OS with Apex.Middleware. It has not been tested with any other middleware.
  • If the run configuration includes SHMEM or ZERO_COPY transport, a middleware configuration file will be created to enable shared-memory transfer.
    • You must start RouDi before running the experiments. This tool will not automatically start it for you.
CHANGELOG

Changelog for package performance_report

X.Y.Z (YYYY/MM/DD)

2.3.0 (2024/09/24)

2.2.0 (2024/05/15)

Changed

  • Plugins are now responsible for enabling shared memory transfer, so runner and commander will no longer set the related runtime flags (e.g. CYCLONEDDS_URI)

Fixed

  • For categorical plots, coerce the x_range to a string

2.1.0 (2024/04/17)

2.0.0 (2024/03/19)

Removed

  • Removed the special handling for the BoundedSequenceFlat messages, because those messages were removed from performance_test

1.5.2 (2024/01/24)

Fixed

  • Gracefully handle failures to parse JSON log files

1.5.0 (2023/06/14)

Added

  • The reporter box-and-whisker latency plots now support latency_mean_ms for the y-axis, in addition to the previously-supported latency_mean
  • Added a new option prevent_cpu_idle (bool) for experiment configurations, which corresponds to the --prevent-cpu-idle switch in perf_test

Changed

  • Update the README to better explain the purpose and usage of runner, commander, plotter, and reporter

1.4.2 (2023/03/15)

1.4.1 (2023/02/23)

1.4.0 (2023/02/20)

Added

  • Figures have a new x_range option: ru_maxrss_mb

Changed

  • BoundedSequenceFlatXYZ will be mapped to BoundedSequenceXYZ for categorical plots, so that both message types can be compared directly on a single plot

1.3.7 (2023/01/04)

Added

  • The reporter templates can access os environment variables:
    • {{ env['SOME_ENVIRONMENT_VARIABLE'] }}
  • For error detection, the exit code for performance_report reporter is the number of missing datasets

Fixed

1.3.6 (2023/01/03)

1.3.5 (2022/12/05)

1.3.4 (2022/11/28)

1.3.3 (2022/11/28)

Fixed

  • Do not try to create a box-and-whisker for a file that contains no measurements

1.3.2 (2022/11/21)

1.3.1 (2022/11/21)

1.3.0 (2022/08/25)

Added

  • The reporter configuration supports box-and-whisker latency plots:
    • set the x_range to Experiment
    • set the y_range to latency_mean
    • set datasets to one or more datasets, each containing a single experiment
    • an example can be found in cfg/reporter/report_many_experiments.yaml

Changed

  • Expanded the transport setting into the following two settings:
    • process_configuration:
      • INTRA_PROCESS
      • INTER_PROCESS
    • sample_transport:
      • BY_COPY
      • SHARED_MEMORY
      • LOANED_SAMPLES

1.2.1 (2022/06/30)

1.2.0 (2022/06/28)

Changed

  • In the reporter configuration, the template_name value may be an array

1.1.2 (2022/06/08)

1.1.1 (2022/06/07)

Fixed

  • Bokeh line style can be specified in the plotter and reporter .yaml files

1.1.0 (2022/06/02)

Fixed

  • Fix the GBP builds by removing python3-bokeh-pip from package.xml

1.0.0 (2022/05/12)

Added

  • Shared memory experiments are now compatible with both Apex.Middleware and rmw_cyclonedds_cpp
  • commander tool to emit the commands for running the experiments, instead of running them directly

Changed

  • Use the new perf_test CLI args for QoS settings instead of the old flags

Deprecated

Removed

Fixed

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check on the ROS Wiki Tutorials page for the package.

Dependent Packages

No known dependents.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.
