LiDAR-assisted visual odometry (VO) is a widely used solution for pose estimation and mapping. However, most existing LiDAR-assisted VO systems suffer from two problems: 1) a lack of distinctive and evenly distributed pixels for tracking, caused by the sparsity of LiDAR points and the limited FOV overlap between the camera and the LiDAR, and 2) nontrivial errors when processing LiDAR point clouds. To address these problems, we present CR-LDSO, a direct sparse LiDAR-assisted VO whose core components are: 1) a novel cloud reusing method with point extraction/re-extraction that increases both the camera-LiDAR FOV overlap and the number of high-quality tracking pixels, and 2) an occlusion removal method that excludes from the sliding-window optimization pixels mismatched due to occluded 3D objects, together with a point extraction strategy that avoids depth interpolation. Extensive experimental results on public datasets demonstrate the superiority of our method over existing state-of-the-art methods.
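To make the two core ideas concrete, the following is a minimal sketch, not the authors' implementation: it assumes a pinhole camera model, a buffer of past LiDAR scans already expressed in world coordinates, and a z-buffer-style depth test as one plausible form of occlusion removal. All function and variable names (`project_reused_clouds`, `T_cam_from_world`, `K`, the `cell` size) are illustrative assumptions, not from the paper.

```python
import numpy as np

def project_reused_clouds(clouds_world, T_cam_from_world, K, img_size, cell=4):
    """Reuse several buffered LiDAR scans by projecting them into the current
    camera frame, keeping only the nearest point per small pixel cell so that
    points hidden behind closer surfaces are treated as occluded and dropped.
    clouds_world: list of (N_i, 3) arrays of points in world coordinates.
    T_cam_from_world: 4x4 rigid transform; K: 3x3 camera intrinsics."""
    w, h = img_size
    pts = np.vstack(clouds_world)                       # reuse all buffered scans
    pts_cam = (T_cam_from_world[:3, :3] @ pts.T).T + T_cam_from_world[:3, 3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]              # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                         # pinhole projection to pixels
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, depth = uv[in_img], pts_cam[in_img, 2]
    # z-buffer over coarse cells: when two points fall in the same cell, the
    # farther one is presumed occluded and discarded.
    cell_id = ((uv[:, 1] // cell).astype(int) * (w // cell)
               + (uv[:, 0] // cell).astype(int))
    order = np.argsort(depth)                           # nearest points first
    _, first = np.unique(cell_id[order], return_index=True)
    keep = order[first]
    return uv[keep], depth[keep]
```

Projecting points accumulated over multiple scans, rather than the single latest scan, is what densifies the camera-LiDAR overlap; the per-cell nearest-depth test is a common stand-in for the occlusion handling the abstract describes.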