Repository Summary

Checkout URI https://github.com/mikeferguson/robot_calibration.git
VCS Type git
VCS Version humble
Last Updated 2024-12-05
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Packages

Name Version
robot_calibration 0.8.3
robot_calibration_msgs 0.8.3

README

Robot Calibration

This package offers several ROS2 nodes. The primary one is called calibrate, and can be used to calibrate a number of parameters of a robot, such as:

  • 3D Camera intrinsics and extrinsics
  • Joint angle offsets
  • Robot frame offsets

These parameters are then inserted into an updated URDF, or updated camera configuration YAML in the case of camera intrinsics.

Two additional ROS nodes are used for mobile-base related parameter tuning:

  • base_calibration_node - can determine scaling factors for gyro and track width parameters by rotating the robot in place and tracking the actual rotation based on the laser scanner view of a wall.
  • magnetometer_calibration - can be used to do hard iron calibration of a magnetometer.

The calibrate node

Calibration works in two steps. The first step involves the capture of data samples from the robot. Each “sample” comprises the measured joint positions of the robot and two or more “observations”. An observation is a collection of points that have been detected by a “sensor”. For instance, a robot could use a camera and an arm to “detect” the pose of corners on a checkerboard. In the case of the camera sensor, the collection of points is simply the detected positions of each corner of the checkerboard, relative to the pose of the camera reference frame. For the arm, it is assumed that the checkerboard is fixed relative to a virtual frame which is fixed relative to the end effector of the arm. Within the virtual frame, we know the position of each point of the checkerboard corners.

The second step of calibration involves optimization of the robot parameters to minimize the errors. Errors are defined as the difference in the pose of the points based on reprojection through each sensor. In the case of our checkerboard above, the transform between the virtual frame and the end effector becomes an additional set of free parameters. By estimating these parameters alongside the robot parameters, we can find a set of parameters such that the reprojection of the checkerboard corners through the arm aligns as closely as possible with the reprojection through the camera (and any associated kinematic chain, for instance, a pan/tilt head).

Configuration

Configuration is typically handled through two sets of YAML files. The first YAML file specifies the details needed for data capture:

  • chains - The kinematic chains of the robot which should be controlled, and how to control them so that we can move the robot to each desired pose for sampling.
  • features - The configuration for the various “feature finders” that will be making our observations at each sample pose. Current finders include an LED detector, checkerboard finder, and plane finder. Feature finders are plugin-based, so you can create your own. A minimal capture configuration is sketched below this list.
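
A minimal capture configuration, as a hedged sketch: the layout below (lists of chain and feature names, each followed by a parameter block of the same name) mirrors the example excerpted from the newer README later in this document, and the controller topic, joint names, and finder block are placeholders to adapt to your robot.

```yaml
robot_calibration:
  ros__parameters:
    # Kinematic chains to control during capture
    chains:
      - arm
    # Feature finders that produce observations at each sample pose
    features:
      - checkerboard_finder
    # Parameter block for the "arm" chain; topic and joint names are placeholders
    arm:
      topic: /arm_controller/follow_joint_trajectory
      joints:
        - shoulder_pan_joint
        - shoulder_lift_joint
        - elbow_flex_joint
        - wrist_flex_joint
    # The checkerboard_finder gets its own parameter block here; its settings
    # depend on the finder plugin, so they are omitted from this sketch
```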

The second configuration file specifies the configuration for optimization. This specifies several items:

  • base_link - Frame used for internal calculations. Typically this is the root of the URDF, which is often named base_link.
  • calibration_steps - In ROS2, multistep calibration is fully supported. The parameter “calibration_steps” should be a list of step names. A majority of calibrations probably only use a single step, but the step name must still be in a YAML list format. A minimal sketch of these top-level parameters follows this list.
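
As a hedged sketch, the top level of the optimization configuration with a single step could look like the following; the step name single_calibration_step is arbitrary, and the file layout is an assumption based on the parameter descriptions above.

```yaml
robot_calibration:
  ros__parameters:
    # Frame used for internal calculations (usually the URDF root)
    base_link: base_link
    # Even a single-step calibration uses a YAML list of step names
    calibration_steps:
      - single_calibration_step
    # Each step is then configured in its own parameter block (see below)
```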

For each calibration step, there are several parameters:

  • models - Models define how to reproject points. The basic model is a kinematic chain. Additional models can reproject through a kinematic chain and then a sensor, such as a 3d camera. For IK chains, the frame parameter is the tip of the IK chain. The “models” parameter is a list of model names.
  • free_params - Defines the names of single-value free parameters. These can be the names of a joint for which the joint offset should be calculated, camera parameters such as focal lengths or the driver offsets for Primesense devices. If attempting to calibrate the length of a robot link, use free_frames to define the axis that is being calibrated.
  • free_frames - Defines the names of multi-valued free parameters that are 6-d transforms. Also defines which axes are free. X, Y, and Z can all be independently set as free parameters. Roll, pitch, and yaw can also be set free; however, because calibration internally uses an angle-axis representation, either all three should be set free or only one should be free. You should never set two out of three to be free parameters.
  • free_frames_initial_values - Defines the initial values for free_frames. X, Y, Z offsets are in meters. ROLL, PITCH, YAW are in radians. This is most frequently used for setting the initial estimate of the checkerboard position, see details below.
  • error_blocks - List of error block names, which are then defined under their own namespaces. A sketch combining these step parameters follows this list.
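
Continuing the sketch above, a single step's parameter block could combine these items as follows. The model types (chain3d, camera3d) are described just below; the frame and joint names are placeholders, and the exact keys for free_frames, free_frames_initial_values, and the error block (model_a/model_b) are assumptions to verify against a real configuration such as the UBR-1 example.

```yaml
robot_calibration:
  ros__parameters:
    base_link: base_link
    calibration_steps:
      - single_calibration_step
    single_calibration_step:
      # Models are listed by name, then defined in their own blocks
      models:
        - arm
        - camera
      arm:
        type: chain3d
        frame: wrist_roll_link                  # placeholder tip frame
      camera:
        type: camera3d
        frame: head_camera_rgb_optical_frame    # placeholder camera frame
      # Single-value free parameters (joint offsets, camera intrinsics, ...)
      free_params:
        - head_pan_joint
        - head_tilt_joint
      # 6-d free transforms, listed by name; the per-axis flags below are an
      # assumed layout -- check a working example for the exact keys
      free_frames:
        - checkerboard
      checkerboard:
        x: true
        y: true
        z: true
        roll: true
        pitch: true
        yaw: true
      # free_frames_initial_values would list the frames whose initial
      # estimates are set explicitly (layout omitted from this sketch)
      # Error blocks are listed by name and defined under their own namespaces
      error_blocks:
        - hand_eye
      hand_eye:
        type: chain3d_to_chain3d
        model_a: camera    # assumed key names
        model_b: arm
```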

For each model, the type must be specified. The type should be one of:

  • chain3d - Represents a kinematic chain from the base_link to the frame parameter (which in MoveIt/KDL terms is usually referred to as the tip).
  • camera3d - Represents a kinematic chain from the base_link to the frame parameter, and includes the pinhole camera model parameters (cx, cy, fx, fy) when doing projection of the points. This model only works if your sensor publishes CameraInfo. Further, the calibration obtained when this model is used and any of the pinhole parameters are free parameters is only valid if the physical sensor actually uses the CameraInfo for 3d projection (this is generally true for the Primesense/Astra sensors).

File truncated at 100 lines; see the full file

Repository Summary

Checkout URI https://github.com/mikeferguson/robot_calibration.git
VCS Type git
VCS Version ros2
Last Updated 2025-03-17
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Packages

Name Version
robot_calibration 0.10.0
robot_calibration_msgs 0.10.0

README

Robot Calibration

This package offers several ROS2 nodes. The primary one is called calibrate, and can be used to calibrate a number of parameters of a robot, such as:

  • 3D Camera intrinsics and extrinsics
  • Joint angle offsets
  • Robot frame offsets

These parameters are then inserted into an updated URDF, or updated camera configuration YAML in the case of camera intrinsics.

Two additional ROS nodes are used for mobile-base related parameter tuning:

  • base_calibration_node - can determine scaling factors for wheel diameter, track width and gyro gain by moving and rotating the robot while tracking the actual movement based on the laser scanner view of a wall.
  • magnetometer_calibration - can be used to do hard iron calibration of a magnetometer.

The calibrate node

Calibration works in two steps. The first step involves the capture of data samples from the robot. Each “sample” comprises the measured joint positions of the robot and two or more “observations”. An observation is a collection of points that have been detected by a “sensor”. For instance, a robot could use a camera and an arm to “detect” the pose of corners on a checkerboard. In the case of the camera sensor, the collection of points is simply the detected positions of each corner of the checkerboard, relative to the pose of the camera reference frame. For the arm, it is assumed that the checkerboard is fixed relative to a virtual checkerboard frame which is fixed relative to the end effector of the arm. Within the virtual frame, we know the ideal position of each point of the checkerboard corners since the checkerboard is of known size.

The second step of calibration involves optimization of the robot parameters to minimize the errors. Errors are defined as the difference in the pose of the points based on reprojection through each sensor. In the case of our checkerboard above, the transform between the virtual frame and the end effector becomes an additional set of free parameters. By estimating these parameters alongside the robot parameters, we can find a set of parameters such that the reprojection of the checkerboard corners through the arm aligns as closely as possible with the reprojection through the camera (and any associated kinematic chain, for instance, a pan/tilt head).

Configuration is typically handled through two YAML files, usually called capture.yaml and calibrate.yaml.

If you want to manually move the robot to poses and capture each time you hit ENTER on the keyboard, you can run robot calibration with:

ros2 run robot_calibration calibrate --manual --ros-args --params-file path-to-capture.yaml --params-file path-to-calibrate.yaml

More commonly, you will generate a third YAML file with the capture pose configuration (as documented below in the section “Calibration Poses”):

ros2 run robot_calibration calibrate path-to-calibration-poses.yaml --ros-args --params-file path-to-capture.yaml --params-file path-to-calibrate.yaml

This is often wrapped into a ROS 2 launch file that also records a bagfile of the observations, allowing you to re-run just the calibration part without repeating capture each time. For an example, see the UBR-1 setup in the next section.

Example Configuration

All of the parameters that can be defined in the capture and calibrate steps are documented below, but sometimes it is just nice to have a full example. The UBR-1 robot uses this package to calibrate in ROS2. Start with the calibrate_launch.py in ubr1_calibration.

Capture Configuration

The capture.yaml file specifies the details needed for data capture:

  • chains - A parameter listing the names of the kinematic chains of the robot which should be controlled.
  • features - A parameter listing the names of the various “feature finders” that will be making our observations at each sample pose.

Each of these chains and features is then defined by a parameter block of the same name, for example:

```yaml
robot_calibration:
  ros__parameters:
    # List of chains
    chains:
      - arm
    # List of features
    features:
      - checkerboard_finder
    # Parameter block to define the arm chain
    arm:
      topic: /arm_controller/follow_joint_trajectory
      joints:
```

File truncated at 100 lines; see the full file

Repository Summary

Checkout URI https://github.com/mikeferguson/robot_calibration.git
VCS Type git
VCS Version ros1
Last Updated 2023-08-29
Dev Status DEVELOPED
CI status No Continuous Integration
Released RELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Packages

Name Version
robot_calibration 0.7.2
robot_calibration_msgs 0.7.2

README

Robot Calibration

This package offers ROS nodes. The primary one is called calibrate, and can be used to calibrate a number of parameters of a robot, such as:

  • 3D Camera intrinsics and extrinsics
  • Joint angle offsets
  • Robot frame offsets

These parameters are then inserted into an updated URDF, or updated camera configuration YAML in the case of camera intrinsics.

Two additional ROS nodes are used for mobile-base related parameter tuning:

  • base_calibration_node - can determine scaling factors for gyro and track width parameters by rotating the robot in place and tracking the actual rotation based on the laser scanner view of a wall.
  • magnetometer_calibration - can be used to do hard iron calibration of a magnetometer.

The calibrate node

Calibration works in two steps. The first step involves the capture of data samples from the robot. Each “sample” comprises the measured joint positions of the robot and two or more “observations”. An observation is a collection of points that have been detected by a “sensor”. For instance, a robot could use a camera and an arm to “detect” the pose of corners on a checkerboard. In the case of the camera sensor, the collection of points is simply the detected positions of each corner of the checkerboard, relative to the pose of the camera reference frame. For the arm, it is assumed that the checkerboard is fixed relative to a virtual frame which is fixed relative to the end effector of the arm. Within the virtual frame, we know the position of each point of the checkerboard corners.

The second step of calibration involves optimization of the robot parameters to minimize the errors. Errors are defined as the difference in the pose of the points based on reprojection through each sensor. In the case of our checkerboard above, the transform between the virtual frame and the end effector becomes an additional set of free parameters. By estimating these parameters alongside the robot parameters, we can find a set of parameters such that the reprojection of the checkerboard corners through the arm aligns as closely as possible with the reprojection through the camera (and any associated kinematic chain, for instance, a pan/tilt head).

Configuration

Configuration is typically handled through two sets of YAML files. The first YAML file specifies the details needed for data capture:

  • chains - The kinematic chains of the robot which should be controlled, and how to control them so that we can move the robot to each desired pose for sampling.
  • feature_finders - The configuration for the various “feature finders” that will be making our observations at each sample pose. Current finders include an LED detector, checkerboard finder, and plane finder. Feature finders are plugin-based, so you can create your own. A hedged capture sketch follows this list.
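
A hedged, illustrative sketch of a ROS 1 capture configuration: the exact schema is not shown in the truncated README above, so the key names and layout below are assumptions; consult the package's example configurations for the real format.

```yaml
# Illustrative only: key names and layout are assumptions
chains:
  - name: arm
    topic: /arm_controller/follow_joint_trajectory
    joints:
      - shoulder_pan_joint
      - shoulder_lift_joint
      - elbow_flex_joint
      - wrist_flex_joint
feature_finders:
  - name: checkerboard_finder
    type: robot_calibration/CheckerboardFinder   # plugin name is a placeholder
```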

The second configuration file specifies the configuration for optimization. This specifies several items:

  • base_link - Frame used for internal calculations. Typically this is the root of the URDF, which is often named base_link.
  • models - Models define how to reproject points. The basic model is a kinematic chain. Additional models can reproject through a kinematic chain and then a sensor, such as a 3d camera. For IK chains, the frame parameter is the tip of the IK chain.
    • chain - Represents a kinematic chain from the base_link to the frame parameter (which in MoveIt/KDL terms is usually referred to as the tip).
    • camera3d - Represents a kinematic chain from the base_link to the frame parameter, and includes the pinhole camera model parameters (cx, cy, fx, fy) when doing projection of the points. This model only works if your sensor publishes CameraInfo. Further, the calibration obtained when this model is used and any of the pinhole parameters are free parameters is only valid if the physical sensor actually uses the CameraInfo for 3d projection (this is generally true for the Primesense/Astra sensors).
  • free_params - Defines the names of single-value free parameters. These can be the names of a joint for which the joint offset should be calculated, camera parameters such as focal lengths, or other parameters, such as driver offsets for Primesense devices.
  • free_frames - Defines the names of multi-valued free parameters that are 6-d transforms. Also defines which axes are free. X, Y, and Z can all be independently set as free parameters. Roll, pitch, and yaw can also be set free; however, because calibration internally uses an angle-axis representation, either all three should be set free or only one should be free. You should never set two out of three to be free parameters.
  • free_frames_initial_values - Defines the initial values for free_frames. X, Y, Z offsets are in meters. ROLL, PITCH, YAW are in radians. This is most frequently used for setting the initial estimate of the checkerboard position, see details below.
  • error_blocks - These define the actual errors to compare during optimization; a hedged configuration sketch follows this README excerpt. There are several error blocks available at this time:
    • chain3d_to_chain3d - This error block can compute the difference in reprojection between two 3D “sensors” which tell us the position of certain features of interest. Sensors might be a 3D camera or an arm which is holding a checkerboard. Was previously called “camera3d_to_arm”.
    • chain3d_to_plane - This error block can compute the difference between projected 3d points and a desired plane. The most common use case is making sure that the ground plane a robot sees is really on the ground.
    • plane_to_plane - This error block is able to compute the difference

File truncated at 100 lines; see the full file
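
Putting the ROS 1 optimization parameters above together, a hedged sketch of a calibrate configuration might look like the following. Frame and joint names are placeholders, and key names such as model_a/model_b and the per-axis flags are assumptions to verify against the package's example configurations.

```yaml
# Illustrative only: key names and layout are assumptions
base_link: base_link
models:
  - name: arm
    type: chain
    frame: wrist_roll_link                     # placeholder tip frame
  - name: camera
    type: camera3d
    frame: head_camera_rgb_optical_frame       # placeholder camera frame
free_params:
  - head_pan_joint
  - head_tilt_joint
free_frames:
  - name: checkerboard
    x: true
    y: true
    z: true
    roll: true
    pitch: true
    yaw: true
free_frames_initial_values:
  - name: checkerboard
    x: 0.25
    y: 0.0
    z: 0.0
    roll: 0.0
    pitch: 0.0
    yaw: 0.0
error_blocks:
  - name: hand_eye
    type: chain3d_to_chain3d
    model_a: camera
    model_b: arm
```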
