Acta Optica Sinica, Vol. 40, Issue 24, 2428002 (2020)
Research on Ground-Plane-Based Monocular Aided LiDAR SLAM
Yan Xiaobin**, Peng Daogang*, and Qi Erjiang***
- College of Automation Engineering, Shanghai University of Electric Power, Shanghai 200090, China
The fusion of a vision sensor and LiDAR can yield a simultaneous localization and mapping (SLAM) system superior to either sensor alone. However, existing vision-LiDAR fusion algorithms still suffer from high computational complexity, and their accuracy and stability are susceptible to incorrect depth matching. To combine vision and LiDAR information more efficiently and robustly, we made full use of the ground-plane information in both the images and the LiDAR point clouds and proposed an efficient vision-assisted LiDAR SLAM algorithm. First, the ground point cloud was segmented from the laser point cloud to guide the extraction of ground ORB feature points in the images, and feature matches were verified via the cross-ratio invariance under the homography transformation. In this way, the absolute-scale motion of the camera was estimated efficiently and robustly by decomposing the homography matrix. Then, the camera motion estimate was interpolated on the Lie group SE(3) to correct the point-cloud distortion produced by the LiDAR's own motion during a sweep. Finally, the monocular camera's motion estimate was taken as the initial value for the pose optimization of the LiDAR odometry. Test results on the public KITTI data set and in a real environment show that the proposed algorithm can effectively employ the camera motion estimate to correct the LiDAR point-cloud distortion and achieve accurate, real-time odometry and mapping.
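The match-verification step relies on a classical projective fact: the cross-ratio of four collinear points is invariant under any homography. A minimal sketch of such a check (function names are illustrative, not from the paper) compares the cross-ratio of four collinear ground points before and after a homography:

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear 2D points, CR = (|p1p3|*|p2p4|) / (|p2p3|*|p1p4|).
    Invariant under any projective transformation (homography)."""
    d = lambda a, b: np.linalg.norm(a - b)
    return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))

def apply_h(H, p):
    """Apply a 3x3 homography to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Four collinear ground features whose cross-ratio differs significantly between the two images can then be rejected as a wrong match before the homography is decomposed.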
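The de-skewing step interpolates a rigid-body motion on SE(3): a point captured at fractional time s of the sweep is corrected with exp(s·log(T)), where T is the estimated motion over the whole sweep. The following is a self-contained sketch of that idea (the function names and the per-point timestamp convention are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def so3_hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_log(T):
    """Log map SE(3) -> se(3), returned as a 6-vector (omega, v)."""
    R, t = T[:3, :3], T[:3, 3]
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-8:
        return np.concatenate([np.zeros(3), t])
    omega = theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    W = so3_hat(omega)
    V_inv = (np.eye(3) - 0.5 * W
             + (1.0 / theta**2
                - (1.0 + np.cos(theta)) / (2.0 * theta * np.sin(theta))) * W @ W)
    return np.concatenate([omega, V_inv @ t])

def se3_exp(xi):
    """Exp map se(3) -> SE(3) via the Rodrigues formula."""
    omega, v = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    W = so3_hat(omega)
    if theta < 1e-8:
        R, V = np.eye(3), np.eye(3)
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1.0 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def deskew(points, stamps, T_sweep):
    """Correct motion distortion: a point captured at fractional time
    s in [0, 1] of the sweep is re-projected with exp(s * log(T_sweep))."""
    xi = se3_log(T_sweep)
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, stamps)):
        T = se3_exp(s * xi)
        out[i] = T[:3, :3] @ p + T[:3, 3]
    return out
```

With s = 0 a point is left unchanged and with s = 1 it receives the full sweep motion, so the correction varies smoothly along the scan.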