  • Paper Information
  • Received: Jul. 20, 2020

    Accepted: Sep. 15, 2020

    Posted: Dec. 1, 2020

    Published Online: Dec. 3, 2020

    The Author Email: Yan Xiaobin (tobelegend@hotmail.com), Peng Daogang (pengdaogang@126.com), Qi Erjiang (xinbdzh@163.com)

    DOI: 10.3788/AOS202040.2428002

  • Citation

    Xiaobin Yan, Daogang Peng, Erjiang Qi. Research on Ground-Plane-Based Monocular Aided LiDAR SLAM[J]. Acta Optica Sinica, 2020, 40(24): 2428002


  • Category
  • Remote Sensing and Sensors
Acta Optica Sinica, Vol. 40, Issue 24, 2428002 (2020)

Research on Ground-Plane-Based Monocular Aided LiDAR SLAM

Yan Xiaobin**, Peng Daogang*, and Qi Erjiang***

Author Affiliations

  • College of Automation Engineering, Shanghai University of Electric Power, Shanghai 200090, China

Abstract

The fusion of a vision sensor and LiDAR can yield a simultaneous localization and mapping (SLAM) system superior to either sensor alone. However, existing vision-LiDAR fusion algorithms still suffer from high computational complexity, and their accuracy and stability are susceptible to incorrect depth matching. To combine vision and LiDAR information more efficiently and robustly, we made full use of the ground-plane information in images and LiDAR point clouds and proposed an efficient vision-assisted LiDAR SLAM algorithm. First, the ground point cloud was segmented from the laser point cloud to extract the ground ORB feature points in the images, and feature matches were verified using the cross-ratio invariance of the homography transformation. In this way, absolute-scale motion estimation of the camera was realized efficiently and robustly via homography-matrix decomposition. Then, the obtained camera motion estimate was interpolated on the Lie group SE(3) to correct the point-cloud distortion generated by the LiDAR during its own motion. Finally, the camera motion estimate was taken as the initial value for pose optimization in the LiDAR odometry. Test results on the public KITTI dataset and in a real environment show that the proposed algorithm can effectively employ the camera motion estimate to correct the LiDAR point-cloud distortion and achieve accurate, real-time odometry and mapping.
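The SE(3) interpolation step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of pose interpolation on SE(3) via the exponential and logarithm maps, not the authors' implementation; the function names (`se3_exp`, `se3_log`, `interpolate_pose`, `deskew_point`) and the assumption of constant velocity within one LiDAR sweep are hypothetical.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map se(3) -> SE(3); xi = (rho, phi)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-10:
        R = np.eye(3) + W
        V = np.eye(3) + 0.5 * W
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T

def se3_log(T):
    """Logarithm map SE(3) -> se(3), valid for rotation angles below pi."""
    R, t = T[:3, :3], T[:3, 3]
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-10:
        phi = np.zeros(3)
        V_inv = np.eye(3)
    else:
        phi = theta / (2 * np.sin(theta)) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        W = hat(phi)
        V_inv = (np.eye(3) - 0.5 * W
                 + (1 / theta**2
                    - (1 + np.cos(theta)) / (2 * theta * np.sin(theta))) * W @ W)
    return np.concatenate([V_inv @ t, phi])

def interpolate_pose(T0, T1, s):
    """Pose at fraction s in [0, 1] along the geodesic from T0 to T1 on SE(3)."""
    return T0 @ se3_exp(s * se3_log(np.linalg.inv(T0) @ T1))

def deskew_point(p, s, T0, T1):
    """Transform a LiDAR point p captured at scan-time fraction s
    using the pose interpolated between the sweep endpoints."""
    T = interpolate_pose(T0, T1, s)
    return T[:3, :3] @ p + T[:3, 3]
```

Interpolating in the Lie algebra rather than component-wise keeps the rotation on the manifold, so each point in the sweep can be transformed by a pose consistent with the camera's motion estimate at its own timestamp.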
