December 2024

Improving LiDAR–Camera Fusion in ROS 2

8 min read

Exploring techniques for better sensor fusion between LiDAR and camera data in ROS 2, including calibration methods and real-time processing optimizations.

Introduction

Sensor fusion is a critical component in modern autonomous systems. Combining LiDAR's precise distance measurements with a camera's rich visual information creates a more robust perception system. In this post, I'll share my experience improving LiDAR-camera fusion in ROS 2.

Why Sensor Fusion?

LiDAR provides excellent 3D spatial information but lacks texture and color details. Cameras offer rich visual context but struggle with depth estimation. By fusing these sensors, we can leverage the strengths of both.

Calibration Challenges

The first major challenge is accurate calibration between the LiDAR and camera coordinate systems. I used the following approach:

  1. Physical mounting and measurement of sensor positions
  2. Checkerboard-based camera calibration (a minimal sketch follows this list)
  3. Point cloud to image projection verification
  4. Iterative refinement using real-world data
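
For the checkerboard step, OpenCV's standard calibration workflow covers the camera intrinsics. Below is a minimal sketch; the 9x6 board, 25 mm squares, and calib_images/ path are placeholder assumptions, not my exact setup.

# Minimal checkerboard intrinsic calibration with OpenCV
# (board size, square size, and image path are illustrative placeholders)
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)    # inner corners per row/column
SQUARE_SIZE = 0.025    # meters

# 3D coordinates of the board corners in the board's own frame
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob('calib_images/*.png'):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix and dist_coeffs feed into the projection step later
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f'RMS reprojection error: {rms:.3f} px')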

Implementation in ROS 2

I created custom ROS 2 nodes for the calibration and fusion pipeline. The core step is projecting LiDAR points into the camera image:

# Example: Point cloud to image projection
import numpy as np

def project_lidar_to_image(point_cloud, camera_matrix, transform_matrix):
    # Flatten to an (N, 3) array of XYZ points in the LiDAR frame
    points_3d = point_cloud.reshape(-1, 3)
    # Homogeneous coordinates so the 4x4 LiDAR-to-camera extrinsic applies
    points_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    points_camera = (transform_matrix @ points_h.T)[:3, :]
    # Discard points behind the camera before projecting
    points_camera = points_camera[:, points_camera[2, :] > 0]
    # Pinhole projection, then divide by depth to get pixel coordinates
    points_2d = camera_matrix @ points_camera
    points_2d = points_2d[:2, :] / points_2d[2, :]
    return points_2d.T
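
For context, here's a rough sketch of how such a projection function can be wired into a node using approximate time synchronization. This is not my exact node: the topic names (/lidar/points, /camera/image_raw), the identity calibration matrices, and the 50 ms synchronization slop are placeholder assumptions.

# Sketch of a fusion node (topic names and calibration values are placeholders)
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs_py import point_cloud2
from cv_bridge import CvBridge
from message_filters import Subscriber, ApproximateTimeSynchronizer

class FusionNode(Node):
    def __init__(self):
        super().__init__('lidar_camera_fusion')
        self.bridge = CvBridge()
        # Intrinsics and LiDAR-to-camera extrinsic would come from calibration
        self.camera_matrix = np.eye(3)
        self.transform_matrix = np.eye(4)
        cloud_sub = Subscriber(self, PointCloud2, '/lidar/points')
        image_sub = Subscriber(self, Image, '/camera/image_raw')
        # Pair messages whose stamps are within 50 ms of each other
        sync = ApproximateTimeSynchronizer([cloud_sub, image_sub],
                                           queue_size=10, slop=0.05)
        sync.registerCallback(self.fuse)

    def fuse(self, cloud_msg, image_msg):
        image = self.bridge.imgmsg_to_cv2(image_msg, desired_encoding='bgr8')
        points = np.array([(p[0], p[1], p[2]) for p in
                           point_cloud2.read_points(cloud_msg,
                                                    field_names=('x', 'y', 'z'),
                                                    skip_nans=True)])
        # Uses project_lidar_to_image() defined above
        pixels = project_lidar_to_image(points, self.camera_matrix,
                                        self.transform_matrix)
        self.get_logger().info(f'Projected {len(pixels)} points onto a '
                               f'{image.shape[1]}x{image.shape[0]} image')

def main():
    rclpy.init()
    rclpy.spin(FusionNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()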

Performance Optimizations

Real-time processing required several optimizations to keep the pipeline within sensor frame rates.
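
As an illustration of the kind of optimization that helps here, the sketch below thins the cloud with a simple voxel grid before projection. Treat it as a generic example rather than the exact technique used, and the 0.1 m voxel size as an arbitrary placeholder.

# Generic voxel-grid downsampling (voxel size is a placeholder value)
import numpy as np

def voxel_downsample(points_3d, voxel_size=0.1):
    # Map each point to an integer voxel index
    voxel_idx = np.floor(points_3d / voxel_size).astype(np.int64)
    # Keep one point per occupied voxel
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points_3d[np.sort(keep)]

# Example: thin the cloud before projecting it into the image
# points = voxel_downsample(points, voxel_size=0.2)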

Results

The improved fusion system achieved:

Conclusion

Sensor fusion is an iterative process. The key is starting with solid calibration and continuously refining based on real-world performance. ROS 2's modular architecture makes it an excellent platform for such work.
