Abstract

In this paper, a novel 3D lidar-assisted monocular visual SLAM (LAMV-SLAM) framework is proposed for mobile robots in outdoor environments. LAMV-SLAM runs in real time without a GPU and builds a dense map with real scale. An online photometric calibration thread is integrated into LAMV-SLAM to eliminate photometric disturbances in images. The tracking thread combines lidar and vision data to estimate and refine the frame-to-frame transformation. Within this thread, a depth fusion algorithm provides accurate depth values for the extracted visual features by combining the lidar points, and a novel two-stage optimization method uses the fused lidar-vision data to estimate the camera transformation with real scale. A parallel mapping thread generates new map points based on a depth filter and lidar-vision data fusion, and a loop closing thread further reduces the accumulated errors of the system. To verify the accuracy and efficiency of the system, we evaluated the proposed pipeline on the KITTI odometry benchmark, where LAMV-SLAM achieves a relative position drift of 0.81% while running at over 3x real-time speed. To verify the robustness of the system in challenging environments, experiments were carried out on the NCLT and nuScenes datasets. Moreover, real-world experiments were conducted on our mobile robot platform to demonstrate the practicability and validity of the proposed approach.
