Abstract

LiDAR Simultaneous Localization and Mapping (SLAM) maps an unknown scene while estimating the LiDAR's pose online. Traditional LiDAR SLAM frameworks rely mainly on geometric features in the environment and ignore LiDAR intensity information, leading to low accuracy in scenes with sparse environmental features. This paper proposes a novel intensity-based LiDAR SLAM framework. In the front end, geometric and intensity features are extracted to match two consecutive scans. To maintain time efficiency, a self-adaptive feature selection strategy is proposed to select geometric and intensity features, and the 6-DOF transformations estimated from the corresponding features are fused with weights for odometry estimation. To improve the effectiveness of loop closure detection (LCD), we propose a novel intensity cylindrical-projection shape context (ICPSC) descriptor and a row-column similarity estimation based on ICPSC. To guarantee the accuracy of LCD, a double-value loop candidate verification strategy is employed. We conduct comprehensive experimental verification of our LiDAR SLAM framework on both the public KITTI datasets and data collected by our own platform across multiple scenes. The results show that, relative to LeGO-LOAM, our framework improves online localization by about 3 m on average in scenes with sparse environmental features, while increasing the time cost by at most 9 ms.
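
The abstract does not detail how the two odometry estimates are combined; the following is a minimal sketch, assuming the geometric and intensity 6-DOF estimates are blended with scalar confidence weights (quaternion slerp for rotation, weighted averaging for translation). The function name fusePoses and the weight value are hypothetical and not from the paper.

```cpp
// Sketch only: weighted fusion of two 6-DOF pose estimates, one from
// geometric features and one from intensity features (assumed scheme,
// not the authors' implementation).
#include <Eigen/Geometry>
#include <iostream>

struct Pose {
    Eigen::Quaterniond q;  // rotation
    Eigen::Vector3d t;     // translation
};

// w in [0, 1] weights the geometric estimate; (1 - w) weights the
// intensity estimate. Rotations are interpolated with slerp.
Pose fusePoses(const Pose& geo, const Pose& inten, double w) {
    Pose fused;
    fused.q = geo.q.slerp(1.0 - w, inten.q).normalized();
    fused.t = w * geo.t + (1.0 - w) * inten.t;
    return fused;
}

int main() {
    Pose geo{Eigen::Quaterniond(Eigen::AngleAxisd(0.10, Eigen::Vector3d::UnitZ())),
             Eigen::Vector3d(1.00, 0.00, 0.0)};
    Pose inten{Eigen::Quaterniond(Eigen::AngleAxisd(0.12, Eigen::Vector3d::UnitZ())),
               Eigen::Vector3d(1.02, 0.01, 0.0)};

    // Hypothetical weighting: 0.7 geometric / 0.3 intensity.
    Pose fused = fusePoses(geo, inten, 0.7);
    std::cout << "fused translation: " << fused.t.transpose() << std::endl;
    return 0;
}
```

In practice the weight could be set by the self-adaptive feature selection strategy, e.g. according to how many reliable geometric versus intensity features each scan yields; the abstract does not specify the exact criterion.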
