Abstract
Simultaneous localization and mapping (SLAM) estimates a vehicle's pose and builds a map of the environment from information collected primarily by sensors such as LiDAR and cameras. Compared with camera-based SLAM, LiDAR-based SLAM is better suited to complex environments and is largely insensitive to weather and illumination, which has made it an increasingly active topic in autonomous driving. However, LiDAR-based SLAM in rugged scenes has received relatively little attention, and two issues remain unsolved: on the one hand, the small overlap between two adjacent point clouds leaves too few useful features to extract; on the other hand, conventional feature matching does not account for the pitching of the point cloud, which frequently causes matching failures. Hence, this study proposes a LiDAR SLAM algorithm based on neighborhood information constraints (LoNiC) for rugged terrain. Firstly, we obtain feature points carrying surface information from the distribution of normal vector angles in each point's neighborhood and extract discriminative features from the local surface information of the point cloud, improving the descriptive power of feature points in rugged scenes. Secondly, we introduce a multi-scale constraint description based on point cloud curvature, normal vector angle, and Euclidean distance, which sharpens the algorithm's ability to distinguish between feature points and prevents mis-registration. Subsequently, to lessen the impact of the initial pose estimate on the precision of point cloud registration, we add a dynamic iteration factor to the registration process, refining the correspondences between matching point pairs by adjusting the distance and angle thresholds. Finally, experiments on the KITTI and JLU campus datasets verify that the proposed algorithm significantly improves mapping accuracy. In rugged scenes specifically, the mean relative translation error is 0.0173% and the mean relative rotation error is 2.8744°/m, on par with current state-of-the-art (SOTA) methods.
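The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the first idea: scoring points by the spread of normal-vector angles in a local neighborhood and keeping the most varied ones as feature candidates. The function names, the k-nearest-neighbor parameterization, and the standard-deviation score are assumptions for illustration, not the authors' LoNiC implementation.

```python
import numpy as np
from scipy.spatial import cKDTree


def estimate_normals(points, k=20):
    """Estimate per-point normals via PCA over the k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        # The eigenvector of the smallest eigenvalue approximates the normal.
        _, eigvec = np.linalg.eigh(cov)
        normals[i] = eigvec[:, 0]
    return normals, idx


def normal_angle_scores(points, k=20):
    """Score each point by the spread of normal-vector angles in its neighborhood.

    A wide angle distribution indicates locally varied surface geometry, so
    high-scoring points are kept as discriminative feature candidates.
    """
    normals, idx = estimate_normals(points, k)
    scores = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        # Angles between this point's normal and its neighbors' normals;
        # the absolute dot product ignores the sign ambiguity of PCA normals.
        cos = np.abs(normals[nbrs] @ normals[i]).clip(0.0, 1.0)
        scores[i] = np.arccos(cos).std()  # spread of the angle distribution
    return scores


# Hypothetical usage: keep the top 5% of points as feature candidates.
pts = np.random.rand(2000, 3)              # stand-in for a LiDAR scan
feature_idx = np.argsort(normal_angle_scores(pts))[-len(pts) // 20:]
```

Likewise, the "dynamic iteration factor" can be pictured as a schedule that tightens the distance and angle gates on candidate correspondences as registration converges; the initial thresholds and decay rate below are placeholder values, assumed purely for illustration.

```python
import numpy as np


def dynamic_thresholds(iteration, d0=1.0, a0=np.deg2rad(30.0), decay=0.9):
    """Shrink the correspondence gates per iteration (all values placeholders).

    d0 is the initial distance threshold (m), a0 the initial normal-angle
    threshold (rad), and decay the per-iteration shrink factor.
    """
    factor = decay ** iteration            # the dynamic iteration factor
    return d0 * factor, a0 * factor


# Each iteration, matching pairs whose residual distance or normal angle
# exceeds the current gate would be rejected before the next pose update.
for it in range(5):
    d_max, a_max = dynamic_thresholds(it)
    print(f"iter {it}: dist gate {d_max:.3f} m, angle gate {np.degrees(a_max):.1f} deg")
```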