extrinsic_calibrator_core package from the extrinsic_calibrator repo (extrinsic_calibrator, extrinsic_calibrator_core, extrinsic_calibrator_examples)
Package Summary

| Field | Value |
|---|---|
| Tags | No category tags. |
| Version | 0.1.0 |
| License | AGPL-3.0-only |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |

Repository Summary

| Field | Value |
|---|---|
| Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
| VCS Type | git |
| VCS Version | humble |
| Last Updated | 2025-03-17 |
| Dev Status | MAINTAINED |
| CI status | No Continuous Integration |
| Released | UNRELEASED |
| Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Maintainers
- Josep Rueda Collell
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes a utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml file.
```yaml
aruco_params:
  aruco_dict: # OpenCV marker dictionary
    type: string
    default_value: "DICT_6X6_250"
  marker_length: # Length of the marker side in meters
    type: double
    default_value: 0.26
```
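The marker_length parameter is what gives pose estimates their metric scale: a marker's physical corner coordinates are defined from its side length, and the solver returns translations in those units. As an illustrative sketch (not code from this package), the 3D corner layout of a square marker can be built like this:

```python
import numpy as np

def marker_object_points(marker_length: float) -> np.ndarray:
    """3D corners of a square marker of side marker_length (meters),
    centered at the origin of its own frame, lying in the z = 0 plane.
    Corner order: top-left, top-right, bottom-right, bottom-left."""
    h = marker_length / 2.0
    return np.array([
        [-h,  h, 0.0],
        [ h,  h, 0.0],
        [ h, -h, 0.0],
        [-h, -h, 0.0],
    ])

# Using the default_value from aruco_parameters.yaml (0.26 m side).
corners = marker_object_points(0.26)
side = np.linalg.norm(corners[1] - corners[0])  # equals marker_length
```

If the printed markers do not match marker_length exactly, every camera and marker pose in the resulting map is scaled by the same error, so measuring the printed side accurately matters.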
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml file. This setup scales to as many cameras as needed.
```yaml
cameras_params:
  cam1:
    image_topic:
      type: string
      default_value: "/camera_1/image_raw"
    camera_info_topic:
      type: string
      default_value: "/camera_1/camera_info"
  cam2:
    image_topic:
      type: string
      default_value: "/camera_2/image_raw"
    camera_info_topic:
      type: string
      default_value: "/camera_2/camera_info"
  # cam3:
  #   image_topic:
  #     type: string
  #     default_value: "/camera_3/image_raw"
  #   camera_info_topic:
  #     type: string
  #     default_value: "/camera_3/camera_info"
```
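Because every camera follows the same two-key structure, a node can discover the configured cameras by iterating over cameras_params rather than hard-coding names. The sketch below (a hypothetical helper, not taken from the package source) shows the idea using PyYAML:

```python
import yaml

# Inline copy of the config structure above; normally this would be
# read from camera_topics_parameters.yaml on disk.
CONFIG = """
cameras_params:
  cam1:
    image_topic:
      default_value: "/camera_1/image_raw"
    camera_info_topic:
      default_value: "/camera_1/camera_info"
  cam2:
    image_topic:
      default_value: "/camera_2/image_raw"
    camera_info_topic:
      default_value: "/camera_2/camera_info"
"""

def camera_topics(config_text: str) -> dict:
    """Map each configured camera name to its (image, camera_info) topic pair."""
    params = yaml.safe_load(config_text)["cameras_params"]
    return {
        name: (cfg["image_topic"]["default_value"],
               cfg["camera_info_topic"]["default_value"])
        for name, cfg in params.items()
    }

topics = camera_topics(CONFIG)
# topics["cam1"] -> ('/camera_1/image_raw', '/camera_1/camera_info')
```

Adding a camera is then purely a configuration change: uncomment or append a camN entry and the loop picks it up with no code changes.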
Usage
- Place the ArUco marker with ID 0 where at least one camera can see it. This marker serves as the reference point: the system treats its position as the origin (0, 0, 0) of the global coordinate frame, called map.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
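The two-stage idea reduces to composing rigid transforms: once a marker's pose in the map frame is known, any camera that sees that marker can be localized by chaining map-to-marker with the inverse of the detected camera-to-marker transform. A minimal numpy sketch of that composition (illustrative values and marker ID, not the package's actual code):

```python
import numpy as np

def make_tf(R: np.ndarray, t) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of a marker in the global map frame (already part of the marker map).
T_map_marker = make_tf(np.eye(3), [2.0, 1.0, 0.0])

# The same marker as detected by a camera: pose of the marker in the camera frame.
T_cam_marker = make_tf(np.eye(3), [0.0, 0.0, 3.0])

# Camera pose in the map frame: map<-marker composed with marker<-camera.
T_map_cam = T_map_marker @ np.linalg.inv(T_cam_marker)
# T_map_cam[:3, 3] is the camera position in the map frame: [2, 1, -3]
```

Averaging this estimate over many markers and frames is what makes seeing several shared markers improve accuracy: each shared marker contributes an independent estimate of the same camera pose.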
- Initiate the calibration process by running the extrinsic_calibrator_node.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm iteratively reports which marker transforms are considered reliable and which are still being verified.
- While the calibration is running, the calibrator prints tables with useful status information.
- Once the calibration is done, the frame of each marker and each camera is published to tf2.
Launching the Calibrator
File truncated at 100 lines; see the full file.
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml file.
aruco_params:
  aruco_dict: # OpenCV marker dictionary
    type: string
    default_value: "DICT_6X6_250"
  marker_length: # Length of the marker side in meters
    type: double
    default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml file. This setup is scalable to handle as many cameras as needed.
cameras_params:
  cam1:
    image_topic:
      type: string
      default_value: "/camera_1/image_raw"
    camera_info_topic:
      type: string
      default_value: "/camera_1/camera_info"
  cam2:
    image_topic:
      type: string
      default_value: "/camera_2/image_raw"
    camera_info_topic:
      type: string
      default_value: "/camera_2/camera_info"
  # cam3:
  #   image_topic:
  #     type: string
  #     default_value: "/camera_3/image_raw"
  #   camera_info_topic:
  #     type: string
  #     default_value: "/camera_3/camera_info"
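A node consuming this configuration would pair each camera's image topic with its camera-info topic. As a minimal sketch (in plain Python, with a dict literal mirroring the YAML above; the actual package's parameter handling and rclpy subscription creation are omitted, so the helper name is hypothetical):

```python
# Sketch: iterate a cameras_params structure like the YAML above and
# collect the topic pair declared for each camera. Plain Python only;
# in a real node these pairs would feed subscription creation.

def camera_topic_pairs(cameras_params: dict) -> dict:
    """Map each camera name to its (image, camera_info) topic pair."""
    pairs = {}
    for cam_name, topics in cameras_params.items():
        pairs[cam_name] = (
            topics["image_topic"]["default_value"],
            topics["camera_info_topic"]["default_value"],
        )
    return pairs

# Mirrors cam1/cam2 from the YAML above.
params = {
    "cam1": {
        "image_topic": {"type": "string", "default_value": "/camera_1/image_raw"},
        "camera_info_topic": {"type": "string", "default_value": "/camera_1/camera_info"},
    },
    "cam2": {
        "image_topic": {"type": "string", "default_value": "/camera_2/image_raw"},
        "camera_info_topic": {"type": "string", "default_value": "/camera_2/camera_info"},
    },
}

print(camera_topic_pairs(params)["cam1"])
# ('/camera_1/image_raw', '/camera_1/camera_info')
```

Adding a camera is then purely a configuration change: a new `camN` entry in the YAML, with no code modification.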
Usage
- Place the ArUco marker with ID 0 where at least one camera can see it. This marker serves as the reference point: its position is taken as the origin (0,0,0) of the global coordinate system, called map.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
- Initiate the calibration process by running the extrinsic_calibrator_node.
- Wait for the algorithm to gather the transform of each marker from each camera. It iteratively reports which marker transforms are considered reliable and which are still being verified.
- While the calibration is taking place, the calibrator prints tables with useful status information.
- Once the calibration is done, the frame of each marker and camera is published to tf2.
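The chaining behind these steps can be sketched with homogeneous transforms (this is an illustration inferred from the description above, not the package's actual code). With marker 0 fixed at the map origin, a camera that detects marker 0 gets its map pose as map→cam = map→marker0 · (cam→marker0)⁻¹, and any other marker that camera sees is added to the map by map→marker = map→cam · cam→marker:

```python
import numpy as np

# Poses as 4x4 homogeneous transforms; T_a_b maps points in frame b to frame a.
def make_T(yaw: float, tx: float, ty: float, tz: float) -> np.ndarray:
    """Rotation about z by yaw, followed by a translation (toy pose)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Marker 0 defines the map origin.
T_map_marker0 = np.eye(4)

# A camera detects marker 0 (pose of marker 0 in the camera frame).
T_cam_marker0 = make_T(0.5, 1.0, 0.2, 2.0)

# Camera pose in the map frame.
T_map_cam = T_map_marker0 @ np.linalg.inv(T_cam_marker0)

# The same camera also sees a second marker (say ID 7), which is then
# incorporated into the map by chaining through the camera pose.
T_cam_marker7 = make_T(-0.3, 0.4, -0.1, 1.5)
T_map_marker7 = T_map_cam @ T_cam_marker7

# Consistency check: mapping marker 7 back into the camera frame
# recovers the original detection.
assert np.allclose(np.linalg.inv(T_map_cam) @ T_map_marker7, T_cam_marker7)
```

This is also why overlap matters: every shared marker adds another chain linking a camera into the common map frame.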
Launching the Calibrator
File truncated at 100 lines; see the full file.
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the
extrinsic_calibrator_node
.
- Wait for the algorithm to gather the transform of each marker from each camera. The algorithm will iteratively tell the user which marker transforms are finally reliable and which ones are still being verified.
- The calibrator will provide tables with useful information, while the calibration is taking place.
- Finally, once the calibration is done, the frame of each marker and camera will be published in tf2.
Launching the Calibrator
File truncated at 100 lines see the full file
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange
![]() |
extrinsic_calibrator_core package from extrinsic_calibrator repoextrinsic_calibrator extrinsic_calibrator_core extrinsic_calibrator_examples |
ROS Distro
|
Package Summary
Tags | No category tags. |
Version | 0.1.0 |
License | AGPL-3.0-only |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/Ikerlan-KER/extrinsic_calibrator.git |
VCS Type | git |
VCS Version | humble |
Last Updated | 2025-03-17 |
Dev Status | MAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing |
Help Wanted (0)
Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Josep Rueda Collell
Authors
extrinsic_calibrator_core
Overview
extrinsic_calibrator_core
is a ROS2 package designed to calibrate a set of cameras distributed throughout a room. The calibration is performed using ArUco markers scattered randomly in the environment. Each camera detects one or several ArUco markers within its field of view, and the algorithm reconstructs the positions of the markers to create a global map. The positions of the cameras are then computed and incorporated into this aforementioned map map.
The algorithm utilizes the OpenCV library to detect the markers and performs matrix transformations to compute the positions of both markers and cameras relative to each other.
Features
- Extrinsically calibrate any number of cameras simultaneously.
- Automatically build a global ArUco map using marker detection from multiple viewpoints.
- Configurable ArUco marker properties and camera topics.
- Includes an utility for generating printable ArUco markers.
Configuration
The package provides configuration options through YAML files.
ArUco Marker Parameters
You can customize the ArUco markers used in the calibration process by modifying the aruco_parameters.yaml
file.
aruco_params:
aruco_dict: # OpenCV marker dictionary
type: string
default_value: "DICT_6X6_250"
marker_length: # Length of the marker side in meters
type: double
default_value: 0.26
Camera Topics Parameters
You can specify the topics for each camera in the camera_topics_parameters.yaml
file. This setup is scalable to handle as many cameras as needed.
cameras_params:
cam1:
image_topic:
type: string
default_value: "/camera_1/image_raw"
camera_info_topic:
type: string
default_value: "/camera_1/camera_info"
cam2:
image_topic:
type: string
default_value: "/camera_2/image_raw"
camera_info_topic:
type: string
default_value: "/camera_2/camera_info"
# cam3:
# image_topic:
# type: string
# default_value: "/camera_3/image_raw"
# camera_info_topic:
# type: string
# default_value: "/camera_3/camera_info"
Usage
- Place the ArUco marker with ID 0 where any camera can see it. This marker will serve as the reference point. The system will consider this marker’s position as the origin (0,0,0) of the global coordinate system, called
map
.
- Distribute the remaining ArUco markers around the room, ensuring they’re visible to different cameras.
For best results:
- Try to have each camera see multiple markers.
- Aim for overlap, where multiple cameras can see the same markers.
- The more markers a camera can detect, and the more cameras that can see the same markers, the more accurate your calibration will be.
Remember, the system first builds a map of marker positions, then determines camera positions based on this map. So, having markers visible to multiple cameras helps create a more accurate and interconnected calibration.
cameras_paint
- Initiate the calibration process by running the extrinsic_calibrator_node.
- Wait for the algorithm to gather each marker's transform from each camera. The algorithm iteratively reports which marker transforms are considered reliable and which are still being verified.
- While calibration is in progress, the calibrator prints tables with useful status information.
- Finally, once the calibration is done, the frame of each marker and camera is published to tf2.
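The core geometric step behind these stages is transform chaining: once a marker's pose in the map is known, any camera that observes it can be localized via T_map_cam = T_map_marker · inv(T_cam_marker). A stdlib-only sketch using 2-D poses for brevity (the package works with full 3-D transforms; the poses and frame names below are made up for illustration):

```python
import math

# 2-D pose = (x, y, theta). compose(a, b) applies b in a's frame;
# invert(p) is the inverse transform.

def compose(a, b):
    """Compose two (x, y, theta) poses: result = a then b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of an (x, y, theta) pose."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            -t)

# Marker 0 defines the map origin; suppose marker 7's map pose was
# already estimated, and cam2 currently observes marker 7.
T_map_marker7 = (2.0, 1.0, math.pi / 2)   # marker 7 in the map frame
T_cam2_marker7 = (0.5, 0.0, 0.0)          # marker 7 as seen by cam2

T_map_cam2 = compose(T_map_marker7, invert(T_cam2_marker7))
print(T_map_cam2)  # ≈ (2.0, 0.5, 1.5708)
```

The same chaining runs in the other direction to grow the marker map: a camera whose map pose is known turns every new marker it sees into a new map entry, which is why overlapping fields of view interconnect the calibration.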
Launching the Calibrator
File truncated at 100 lines; see the full file.
Wiki Tutorials
Package Dependencies
System Dependencies
Dependant Packages
Launch files
Messages
Services
Plugins
Recent questions tagged extrinsic_calibrator_core at Robotics Stack Exchange