Abstract
For autonomous vehicles, simultaneous localization and mapping (SLAM) is one of the fundamental capabilities, and accurate, reliable SLAM is essential. In this work, we propose a novel LiDAR odometry and mapping method assisted by semantic segmentation and moving object segmentation. First, a framework for segmenting LiDAR point clouds is proposed to acquire semantic information and distinguish moving objects. Then, an effective method is proposed for integrating the semantic and moving object information into a feature-based LiDAR SLAM system. With this assistance, moving points are filtered out and semantic constraints are added to feature extraction and pose estimation, improving localization accuracy. Experimental results on public datasets show that, compared to the baseline, the average relative pose estimation error of the proposed method is reduced by 21.4% in rotation and 29.4% in translation.
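To make the moving-point filtering step concrete, the following minimal sketch shows how per-point semantic labels from a segmentation front end might be used to discard moving objects before feature extraction and pose estimation. The class IDs, array shapes, and function name are illustrative assumptions, not the paper's actual implementation or label map.

```python
import numpy as np

# Hypothetical set of semantic class IDs treated as "moving" (e.g., vehicles,
# pedestrians); the real IDs depend on the segmentation model's label map.
MOVING_CLASS_IDS = {10, 11, 30}  # assumption for illustration only

def filter_moving_points(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Drop points whose predicted semantic class is a moving object.

    points: (N, 3) array of LiDAR points (x, y, z).
    labels: (N,) array of per-point semantic class IDs.
    Returns the static subset of points, which would then be passed on to
    feature extraction and pose estimation.
    """
    static_mask = ~np.isin(labels, list(MOVING_CLASS_IDS))
    return points[static_mask]

# Example usage with stand-in data:
if __name__ == "__main__":
    pts = np.random.rand(1000, 3)
    lbls = np.random.randint(0, 40, size=1000)
    static_pts = filter_moving_points(pts, lbls)
    print(f"kept {len(static_pts)} of {len(pts)} points")
```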