Repository Summary

| Field | Value |
|---|---|
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update, not just GPS.
So I built FusionCore. It’s a 22-state UKF that natively fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts automatically from the innovation sequence. Every sensor update passes through a chi-squared gate before it touches the filter. GPS is handled directly in ECEF, with no intermediate coordinate projection.
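To make the gating idea concrete, here is a minimal sketch of a chi-squared innovation gate. This is not FusionCore’s actual API; the function name and threshold are illustrative. The test is the squared Mahalanobis distance of the innovation against a chi-squared quantile for the measurement’s degrees of freedom:

```python
import numpy as np

def chi2_gate(innovation, S, threshold):
    """Accept a measurement only if its squared Mahalanobis distance
    d2 = y^T S^-1 y falls under a chi-squared threshold (sketch)."""
    d2 = float(innovation.T @ np.linalg.solve(S, innovation))
    return d2 <= threshold

# 3-DOF measurement: the chi-squared 99.9% quantile for 3 dof is about 16.27
S = np.eye(3) * 0.25                  # innovation covariance
good = np.array([0.1, -0.2, 0.05])    # plausible innovation -> accepted
outlier = np.array([25.0, 0.0, 0.0])  # gross GPS jump       -> rejected
print(chi2_gate(good, S, 16.27))      # True
print(chi2_gate(outlier, S, 16.27))   # False
```

Because the distance is normalized by the innovation covariance, the same threshold works for any sensor: a confident filter rejects jumps aggressively, while a freshly initialized one with large covariance lets more through.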
GPS velocity fusion (from any receiver that publishes Doppler velocity, such as the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler ego-velocity fusion works the same way but measures Doppler shift from radar returns rather than satellite signals, so it keeps working indoors and in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FusionCore ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
```shell
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
```
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
```shell
# Build FusionCore, then:
bash demo/run_demo.sh
```
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at its default. Adaptive noise covariance handles the rest within the first minute of operation. Most teams launch with approximate values copied from a datasheet rather than a careful calibration; FusionCore is designed to work under exactly those conditions.
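As a concrete sketch, a minimal configuration under this tuning story might look like the following. The parameter names are taken from this README; the file layout and the numeric values are assumptions, so check fusioncore_ros/config/fusioncore.yaml for the real schema:

```yaml
# Minimal tuning sketch: only the two datasheet values are set explicitly;
# everything else stays at its default. Values below are placeholders.
imu:
  gyro_noise: 8.7e-5      # ARW spec from the IMU datasheet
  accel_noise: 2.0e-3     # VRW spec from the IMU datasheet
adaptive:
  imu: true               # default: adapt measurement noise from innovations
init:
  stationary_window: 2.0  # seconds of standstill used to estimate accel bias
```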
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
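The two mechanisms above can be sketched in a few lines. This is not FusionCore’s internal code; the names, default bounds, and buffer layout are assumptions, shown only to illustrate dt clamping and the rolling buffer used for retrodiction:

```python
from collections import deque

def clamp_dt(t_now, t_prev, min_dt=1e-4, max_dt=0.1):
    """Clamp the propagation interval so timestamp jitter cannot
    produce a zero, negative, or huge dt (bounds are illustrative)."""
    return min(max(t_now - t_prev, min_dt), max_dt)

class ImuBuffer:
    """Rolling buffer of recent IMU samples. When a delayed GPS fix
    arrives, the filter rewinds to the fix time and replays every
    sample since then (retrodiction sketch)."""
    def __init__(self, horizon=0.5):
        self.horizon = horizon   # seconds of history to keep
        self.samples = deque()   # (timestamp, measurement) pairs
    def push(self, t, sample):
        self.samples.append((t, sample))
        # drop samples older than the horizon
        while self.samples and t - self.samples[0][0] > self.horizon:
            self.samples.popleft()
    def since(self, t_fix):
        return [(t, s) for (t, s) in self.samples if t >= t_fix]
```

An out-of-order or stalled clock then degrades gracefully: `clamp_dt(1.0, 1.5)` returns the floor `1e-4` instead of a negative dt, and a 10-second gap is capped at `0.1` rather than propagating the filter across it in one step.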
What is the CPU cost?
A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On a Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)?
Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
```shell
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh  # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON

# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
```
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
- imu.gyro_noise / imu.accel_noise: pull from your IMU’s datasheet
- gnss.base_noise_xy: your GPS receiver’s CEP spec
- Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
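A contributed config might then start like this. The platform, sensors, and numbers below are hypothetical, shown only to illustrate the expected header comment and fields:

```yaml
# Platform: ExampleBot AGV (hypothetical)
# IMU: Bosch BMI088 | GPS: u-blox F9P
# Status: tuned from datasheet only, not field-tested
imu:
  gyro_noise: 1.4e-4      # ARW from the IMU datasheet
  accel_noise: 1.9e-3     # VRW from the IMU datasheet
gnss:
  base_noise_xy: 1.5      # receiver CEP spec, metres
```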
Pull request checklist
- All 49 tests pass (colcon test-result --verbose shows 0 failures)
- For new features: tests added in fusioncore_core/tests/
- For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
- imu.gyro_noise / imu.accel_noise: pull from your IMU’s datasheet
- gnss.base_noise_xy: your GPS receiver’s CEP spec
- Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
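A config following those guidelines might look like this. The parameter names imu.gyro_noise, imu.accel_noise, and gnss.base_noise_xy come from this guide; the node name, nesting, and numeric values are illustrative placeholders, so check fusioncore_ros/config/fusioncore.yaml for the real structure:

```yaml
# Platform: <robot name>   IMU: <model>   GPS: <model>
# Status: tuned from datasheet only (not field-tested)
fusioncore_node:            # node name is illustrative
  ros__parameters:
    imu:
      gyro_noise: 0.00017   # ARW spec from the IMU datasheet
      accel_noise: 0.0023   # VRW spec from the IMU datasheet
    gnss:
      base_noise_xy: 1.5    # receiver CEP spec, metres
```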
Pull request checklist
- All 49 tests pass (colcon test-result --verbose shows 0 failures)
- For new features: tests added in fusioncore_core/tests/
- For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
- imu.gyro_noise / imu.accel_noise: pull from your IMU’s datasheet
- gnss.base_noise_xy: your GPS receiver’s CEP spec
- Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
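A submitted config might look like this sketch (platform, sensor models, and values are hypothetical placeholders, not a real tuning):

```yaml
# Platform: Clearpath Husky A200 (hypothetical example)
# IMU: Bosch BMI088   GPS: u-blox ZED-F9P
# Status: tuned from datasheet only (not field-tested)
imu:
  gyro_noise: 0.014    # placeholder ARW value from the IMU datasheet
  accel_noise: 0.002   # placeholder VRW value from the IMU datasheet
gnss:
  base_noise_xy: 1.5   # placeholder receiver CEP spec, metres
```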
Pull request checklist
- All 49 tests pass (colcon test-result --verbose shows 0 failures)
- For new features: tests added in fusioncore_core/tests/
- For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost?
A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop-class Intel i7: under 0.2 ms per cycle. On a Raspberry Pi 4: under 1 ms per cycle. Peak RAM stays under 50 MB. FusionCore uses Eigen for all matrix math; no numeric libraries beyond Eigen are required.
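The 45 comes from the standard 2n + 1 sigma-point rule with n = 22 states. A generic generation sketch (textbook UKF with an illustrative scaling parameter, not FusionCore's Eigen code):

```python
import numpy as np

def sigma_points(x, P, lam=3.0):
    """Standard 2n+1 UKF sigma points: the mean itself, plus
    symmetric points along the columns of the scaled covariance
    square root (Cholesky factor of (n + lam) * P)."""
    n = x.size
    S = np.linalg.cholesky((n + lam) * P)
    pts = [x]
    for i in range(n):
        pts.append(x + S[:, i])
        pts.append(x - S[:, i])
    return np.array(pts)

x = np.zeros(22)
P = np.eye(22)
print(sigma_points(x, P).shape)  # (45, 22)
```

Because the points come in symmetric plus/minus pairs around the mean, their unweighted average recovers the mean exactly.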
Does it behave the same on ARM (Raspberry Pi, Jetson)?
Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance, and the same code and parameters run on a laptop, a Pi 4, and a Jetson Orin without any configuration changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
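A chi-squared innovation gate of the kind described has this generic shape (sketch only; the 16.27 threshold is the 99.9% chi-squared quantile for a 3-D measurement, matching the confidence level quoted above):

```python
import numpy as np

CHI2_999_DOF3 = 16.27  # 99.9% chi-squared quantile, 3 degrees of freedom

def gate(z, z_pred, S, threshold=CHI2_999_DOF3):
    """Accept a measurement only if its squared Mahalanobis distance
    (innovation against the predicted innovation covariance S) is
    statistically plausible; otherwise it never touches the filter."""
    nu = z - z_pred                        # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 <= threshold

S = np.eye(3) * 4.0                        # predicted innovation covariance
print(gate(np.array([1.0, 1.0, 0.0]), np.zeros(3), S))   # small residual: True
print(gate(np.array([30.0, 0.0, 0.0]), np.zeros(3), S))  # 30 m GPS jump: False
```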
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
```shell
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh  # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON

# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
```
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
- imu.gyro_noise / imu.accel_noise: pull from your IMU’s datasheet
- gnss.base_noise_xy: your GPS receiver’s CEP spec
- Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
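A header of the requested form might look like this (the platform, sensor models, and numbers below are examples, not an existing config):

```yaml
# Platform: Clearpath Husky A200 (example entry)
# IMU: Microstrain 3DM-GX5-25
# GPS: u-blox ZED-F9P
# Status: field-tested   (or: datasheet-only)
imu:
  gyro_noise: 0.005
  accel_noise: 0.02
gnss:
  base_noise_xy: 1.5   # receiver CEP spec, metres
```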
Pull request checklist
- All 49 tests pass (colcon test-result --verbose shows 0 failures)
- For new features: tests added in fusioncore_core/tests/
- For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. Most companies copy approximate values from a CAD model and launch. FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
What is the CPU cost? A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
File truncated at 100 lines see the full file
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
-
imu.gyro_noise/imu.accel_noise: pull from your IMU’s datasheet -
gnss.base_noise_xy: your GPS receiver’s CEP spec - Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
Pull request checklist
-
All 49 tests pass (
colcon test-result --verboseshows 0 failures) -
For new features: tests added in
fusioncore_core/tests/ - For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.
|
fusioncore repositorycompass_msgs fusioncore_core fusioncore_datasets fusioncore_gazebo fusioncore_ros |
ROS Distro
|
Repository Summary
| Checkout URI | https://github.com/manankharwar/fusioncore.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2026-05-09 |
| Dev Status | MAINTAINED |
| Released | RELEASED |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Packages
| Name | Version |
|---|---|
| compass_msgs | 0.2.2 |
| fusioncore_core | 0.2.2 |
| fusioncore_datasets | 0.2.2 |
| fusioncore_gazebo | 0.2.2 |
| fusioncore_ros | 0.2.2 |
README
FusionCore
Stay in the loop: biweekly updates on new features, benchmarks, and real-world robot integrations. Subscribe →
ROS 2 sensor fusion: IMU + wheel encoders + GPS fused via UKF at 100 Hz. 22-state filter with IMU bias estimation, adaptive noise covariance, chi-squared outlier rejection on every sensor, GPS velocity fusion for wheel slip detection, and radar Doppler ego-velocity fusion.
Why I built this
I needed sensor fusion for a mobile robot project and reached for robot_localization like everyone does. It works well. But I wanted a filter that estimated IMU gyro and accelerometer bias as part of the state vector, adapted its noise covariance from real sensor behavior rather than config values, and rejected outliers on every sensor update: not just GPS.
So I built FusionCore. It’s a 22-state UKF that fuses IMU, wheel encoders, GPS position, GPS velocity, and radar Doppler ego-velocity natively. Gyro and accelerometer bias are estimated continuously as filter states. Noise covariance adapts from the innovation sequence automatically. Every sensor update goes through a chi-squared gate before it touches the filter. GPS is handled in ECEF directly, no coordinate projection.
GPS velocity fusion (from any receiver that publishes Doppler velocity, like the u-blox F9P) compares GPS-reported speed against wheel-reported speed on every filter cycle. The innovation directly reveals wheel slip: the Kalman gain automatically down-weights a slipping wheel in proportion to how much it disagrees with GPS. Radar Doppler velocity fusion works the same way but using radio wave physics instead of satellites, so it functions indoors, in rain, fog, dust, and complete darkness.
Benchmark
FusionCore vs robot_localization on the NCLT dataset: same IMU + wheel odometry + GPS, no manual tuning. Six sequences:
RL-EKF run with odom0_twist_rejection_threshold: 4.03 and odom1_pose_rejection_threshold: 3.72 (chi²-equivalent to FusionCore’s thresholds at 99.9% confidence).
| Sequence | FC ATE RMSE | RL-EKF ATE RMSE | RL-UKF |
|---|---|---|---|
| 2012-01-08 | 5.6 m | 13.0 m | NaN divergence at t=31 s |
| 2012-02-04 | 9.7 m | 19.1 m | NaN divergence at t=22 s |
| 2012-03-31 | 4.2 m | 54.3 m | NaN divergence at t=18 s |
| 2012-08-20 | 7.5 m | 24.1 m | NaN divergence |
| 2012-11-04 | 28.6 m | 9.6 m | NaN divergence |
| 2013-02-23 | 4.1 m | 11.0 m | NaN divergence |
Try it yourself
No ROS required (30 seconds):
git clone https://github.com/manankharwar/fusioncore && cd fusioncore
pip install numpy matplotlib
python3 tools/demo_quick.py --open
Generates a side-by-side trajectory comparison from the NCLT benchmark results included in the repository. No datasets to download, no ROS installation needed.
Live demo with real sensor data (5 minutes):
# Build FusionCore, then:
bash demo/run_demo.sh
Downloads a 5 MB demo bag, runs FusionCore live against 120 seconds of real outdoor robot data (IMU + wheel odometry + GPS), and generates the comparison plot automatically.
See demo/README.md for full instructions, including running on your own robot bag.
Does it work on a real robot with messy sensors?
Does it tolerate imperfect IMU calibration?
Yes. adaptive.imu: true (default) automatically adjusts the measurement noise matrix from the innovation sequence. init.stationary_window: 2.0 estimates accelerometer bias before motion starts. In the NCLT benchmark, FusionCore was given only the two noise values from the IMU datasheet with no manual tuning of any other parameter.
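One common innovation-based adaptation scheme looks like the sketch below. This is illustrative, not FusionCore's actual update rule: the class name, the `alpha` smoothing factor, and the eigenvalue floor are assumptions I've made for the example. The core idea is that E[v v^T] = H P H^T + R, so the measurement noise R can be estimated from the observed innovations v minus the filter's predicted innovation covariance.

```python
import numpy as np

class AdaptiveNoise:
    """Exponential moving average of innovation outer products,
    corrected by the filter-predicted term H P H^T. Illustrative only."""

    def __init__(self, dim, alpha=0.05):
        self.R = np.eye(dim)   # start from an uninformative guess
        self.alpha = alpha     # smoothing factor (assumed value)

    def update(self, innovation, HPHt):
        v = np.asarray(innovation, dtype=float)
        R_inst = np.outer(v, v) - HPHt          # instantaneous R estimate
        self.R = (1 - self.alpha) * self.R + self.alpha * R_inst
        # Safeguard: floor the eigenvalues so R stays positive definite
        # even when a single small innovation makes R_inst indefinite.
        w, V = np.linalg.eigh(self.R)
        self.R = (V * np.maximum(w, 1e-9)) @ V.T
        return self.R
```

With this kind of scheme, an optimistic datasheet value self-corrects within the first minute of data, which matches the behavior claimed for adaptive.imu above.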
How much manual tuning is needed?
Two numbers from your IMU datasheet: imu.gyro_noise (ARW spec) and imu.accel_noise (VRW spec). Everything else starts at default. Adaptive noise covariance handles the rest within the first minute of operation. In practice, most teams launch with approximate, copied noise values rather than careful calibration; FusionCore is designed to work under exactly those conditions.
What about timestamp jitter and delayed GPS?
FusionCore stores a rolling IMU buffer and replays intermediate updates when a delayed GPS fix arrives (retrodiction, up to 500 ms configurable). Timestamp jitter is handled by clamping dt to [min_dt, max_dt]. Out-of-order messages are absorbed without divergence.
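The buffering and clamping behavior can be sketched like this. Python for illustration only; the real implementation is C++, and the class name, method names, and default bounds here are hypothetical.

```python
from collections import deque

class ImuBuffer:
    """Rolling IMU buffer for replaying samples when a delayed GPS fix
    arrives, plus dt clamping against timestamp jitter. Illustrative."""

    def __init__(self, horizon_s=0.5, min_dt=1e-4, max_dt=0.05):
        self.buf = deque()          # (stamp, sample) pairs, oldest first
        self.horizon = horizon_s    # retrodiction window (0.5 s per README)
        self.min_dt, self.max_dt = min_dt, max_dt

    def clamp_dt(self, dt):
        """Bound the propagation step so a jittery or bad stamp cannot
        produce a huge or negative integration interval."""
        return min(max(dt, self.min_dt), self.max_dt)

    def push(self, stamp, sample):
        self.buf.append((stamp, sample))
        # Drop samples older than the retrodiction horizon.
        while self.buf and stamp - self.buf[0][0] > self.horizon:
            self.buf.popleft()

    def samples_since(self, stamp):
        """IMU samples newer than a delayed fix's timestamp: the filter
        rewinds to that stamp, applies the fix, then replays these."""
        return [(t, s) for t, s in self.buf if t > stamp]
```

A delayed GPS fix older than the horizon simply finds no samples to replay and is applied at the oldest available state, which is one way to avoid divergence on badly out-of-order messages.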
What is the CPU cost?
A 22-state UKF at 100 Hz generates 45 sigma points per predict step. On a laptop Intel i7: under 0.2 ms per cycle. On Raspberry Pi 4: under 1 ms per cycle. Peak RAM under 50 MB. FusionCore uses Eigen for all matrix math; no external numeric libraries required.
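The 45-point count follows from the standard symmetric sigma-point construction: 2n + 1 points for n = 22 states. A minimal sketch, assuming Julier's symmetric form with a generic kappa parameter (FusionCore's actual scaling choices aren't documented here):

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate the 2n+1 symmetric sigma points for mean x and
    covariance P. For n = 22 this yields 45 points."""
    n = x.size
    # Matrix square root of the scaled covariance.
    L = np.linalg.cholesky((n + kappa) * P)
    pts = [x]
    for i in range(n):
        pts.append(x + L[:, i])   # positive perturbation along column i
        pts.append(x - L[:, i])   # negative perturbation along column i
    return np.array(pts)          # shape (2n + 1, n)
```

Each predict step propagates all 45 points through the motion model, which is why per-cycle cost scales with state dimension rather than with the number of sensors.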
Does it behave the same on ARM (Raspberry Pi, Jetson)? Yes. Eigen auto-detects NEON on ARM and AVX on x86. The NCLT benchmark is reproducible on ARM within floating-point rounding tolerance. The same binary runs on laptop, Pi 4, and Jetson Orin without recompilation or parameter changes.
Coming from robot_localization?
If any of these have bitten you, FusionCore was built with them in mind:
| robot_localization issue | What FusionCore does instead |
|---|---|
| UKF diverges with NaN on GPS-heavy sequences (#780, #777) | Chi-squared gate on every sensor; covariance bounded at each step. All six NCLT sequences finish without NaN. |
CONTRIBUTING
Contributing to FusionCore
Thanks for your interest. Contributions are welcome: hardware configs, bug fixes, tests, and documentation all help.
The fastest way to contribute
The most impactful contributions right now are hardware configs. If you have FusionCore running on a robot, platform, or IMU that isn’t in the repo yet, open a PR adding a YAML under fusioncore_ros/config/. See the hardware config section below.
Before you start
- Check open issues: the bug may already be reported
- Check Discussions: the question may already be answered
- For anything bigger than a typo fix, open an issue or Discussion first so we can align before you write code
Development setup
# Clone and build
git clone https://github.com/manankharwar/fusioncore.git
cd fusioncore
source /opt/ros/jazzy/setup.sh # replace jazzy with humble on Ubuntu 22.04
rosdep install -r --from-paths . --ignore-src --rosdistro jazzy -y # replace jazzy with humble on Ubuntu 22.04
colcon build --packages-up-to compass_msgs fusioncore_core fusioncore_ros --cmake-args -DBUILD_TESTING=ON
# Run all tests before and after your change
colcon test --packages-select compass_msgs fusioncore_core fusioncore_ros
colcon test-result --verbose
All 49 tests must pass. CI will catch it if they don’t.
Hardware configs
A hardware config is a YAML file under fusioncore_ros/config/ named after the platform (e.g. clearpath_husky.yaml, ublox_f9p.yaml).
Copy fusioncore_ros/config/fusioncore.yaml as the starting point and adjust:
- imu.gyro_noise / imu.accel_noise: pull from your IMU's datasheet
- gnss.base_noise_xy: your GPS receiver's CEP spec
- Any topic remaps specific to your platform
Add a comment at the top with: platform name, IMU model, GPS receiver model, and whether it was field-tested or tuned from datasheet only. Field-tested configs get merged faster.
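As a shape reference, a hardware config might look like the sketch below. The key names (imu.gyro_noise, imu.accel_noise, gnss.base_noise_xy) come from this document, but the nesting and the numeric values are placeholders; copy the shipped fusioncore.yaml rather than this block.

```yaml
# Platform: <your platform>  IMU: <model>  GPS: <receiver>
# Tuned from datasheet only (not field-tested).
imu:
  gyro_noise: 0.001    # placeholder; use your IMU's ARW spec
  accel_noise: 0.01    # placeholder; use your IMU's VRW spec
gnss:
  base_noise_xy: 1.5   # placeholder; use your receiver's CEP spec
```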
Pull request checklist
- All 49 tests pass (colcon test-result --verbose shows 0 failures)
- For new features: tests added in fusioncore_core/tests/
- For hardware configs: YAML includes a comment with platform + sensor details
- Commit message describes why, not just what
Code style
C++17. Follow the style of the surrounding code: no reformatting unrelated lines. clang-format is not enforced but is appreciated.
Reporting bugs
Use the Bug Report issue template. Include the output of colcon test-result --verbose if tests are involved.
Questions
Open a Discussion rather than an issue. Issues are for bugs and tracked work; Discussions are for questions, configs, and ideas.
Response time: typically within 24 hours.