Lightweight Stereo Vision and 2D LiDAR Fusion for Robust Environmental Perception on Embedded Robotic Platforms

abstract

  • The development of robust and reliable methods for environmental perception is an important yet challenging step in the field of autonomous mobile robots. Sensor fusion has emerged as a promising technique for safer and enhanced environmental perception; however, it often involves complicated algorithms and complex data processing. Here, we report a computationally efficient algorithm for accurate sensor fusion of 2D LiDAR and 3D stereo vision point clouds. Both the LiDAR and the stereo camera were powered by a Waveshare JetRacer ROS AI Kit. The stereo vision system was calibrated so that the 3D locations of image features could be recovered to build a 3D point cloud. Features were detected with the ORB algorithm and matched with a Brute-Force matcher. Because both point clouds share a similar data representation, aligning the stereo vision point cloud with the LiDAR data enabled a straightforward fusion process. Results demonstrate the method's potential for real-time performance, which was found to depend on image resolution: the JetRacer kit computed the 3D stereo vision point cloud at up to 5 frames per second (fps) at a resolution of 640×480. The implemented sensor fusion algorithm enables real-time environmental perception for robotic applications such as scene reconstruction, autonomous driving, path planning and transportation for rescue vehicles, and precision agriculture for crop monitoring. © 2025 IEEE.
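The abstract names the concrete stereo step: ORB features, Brute-Force matching, and triangulation of matched features into a 3D point cloud using the calibration. Below is a minimal Python/OpenCV sketch of that step under stated assumptions, not the authors' implementation; the projection matrices P1 and P2 (from stereo calibration, e.g. the outputs of cv2.stereoRectify) and the input images are placeholders, and parameters such as the feature count are illustrative.

import cv2
import numpy as np

def stereo_point_cloud(img_left, img_right, P1, P2, max_matches=500):
    """Return an (N, 3) array of triangulated 3D points from a rectified
    stereo pair. P1/P2 are 3x4 projection matrices from calibration
    (placeholders here, not values from the paper)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    if des1 is None or des2 is None:
        return np.empty((0, 3))

    # Brute-Force matching with Hamming distance (ORB descriptors are
    # binary); crossCheck keeps only mutually best matches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_matches]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2xN

    # Triangulate to homogeneous 4xN coordinates, then dehomogenize.
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T  # (N, 3) points in the camera frame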
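The abstract states that the stereo cloud is aligned with the LiDAR data and fused, but gives no alignment details. The sketch below shows one plausible reading: the 2D LiDAR scan is converted to Cartesian points in its scan plane, the stereo cloud is brought into the LiDAR frame with a known rigid extrinsic transform (here called T_lidar_cam, an assumption of this sketch), and the clouds are concatenated. Concatenation is the simplest fusion consistent with the abstract; the paper's actual alignment procedure may differ.

import numpy as np

def fuse_clouds(stereo_xyz, ranges, angles, T_lidar_cam):
    """stereo_xyz: (N, 3) points in the camera frame.
    ranges, angles: 1D arrays from the 2D LiDAR scan.
    T_lidar_cam: 4x4 homogeneous transform, camera frame -> LiDAR frame
    (assumed known from extrinsic calibration; not given in the abstract)."""
    # 2D LiDAR scan to 3D points on the z = 0 plane of the LiDAR frame.
    valid = np.isfinite(ranges)
    lidar_xyz = np.column_stack((
        ranges[valid] * np.cos(angles[valid]),
        ranges[valid] * np.sin(angles[valid]),
        np.zeros(valid.sum()),
    ))

    # Transform the stereo cloud into the LiDAR frame using homogeneous
    # coordinates, so both clouds share one coordinate system.
    ones = np.ones((stereo_xyz.shape[0], 1))
    stereo_in_lidar = (np.hstack((stereo_xyz, ones)) @ T_lidar_cam.T)[:, :3]

    # Fuse by concatenation into a single combined point cloud.
    return np.vstack((lidar_xyz, stereo_in_lidar))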

publication date

  • January 1, 2025