Abstract
Global localization on LiDAR point cloud maps is challenging because of the sparse nature of point clouds and the large size difference between LiDAR scans and the maps. In this paper, we solve the LiDAR-based global localization problem under the plane-motion assumption. We first project the clouds into Bird's-eye View (BV) images, transforming the task into a BV image matching problem. We then introduce a novel local descriptor, i.e., the Histogram of Orientations of Principal Normals (HOPN), to perform matching. The HOPN descriptor encodes the point normals of the clouds and is more effective at matching BV images than common image descriptors. In addition, we present a consensus set maximization algorithm to robustly estimate a rigid pose from the HOPN matches in the case of a low inlier ratio. Experimental results on three large-scale datasets show that our method achieves state-of-the-art global localization performance when using either single LiDAR scans or local maps.
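The core idea of a histogram-of-normal-orientations descriptor can be illustrated with a minimal sketch. The function below is an assumption-laden toy, not the paper's exact HOPN formulation: under the plane-motion assumption, it projects each point normal onto the ground plane and histograms the resulting in-plane orientations into a fixed number of bins, yielding a compact vector that can be compared across a scan and a map.

```python
import numpy as np

def hopn_like_descriptor(normals, num_bins=8):
    """Toy HOPN-style descriptor (illustrative, not the paper's exact method):
    histogram of the in-plane orientations of point normals.

    normals: (N, 3) array of unit normals around a keypoint.
    Returns an L1-normalized histogram of length num_bins.
    """
    # Plane-motion assumption: only the x-y components of each normal matter,
    # so reduce each normal to its orientation angle in the ground plane.
    angles = np.arctan2(normals[:, 1], normals[:, 0])  # in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, num_bins + 1)
    hist, _ = np.histogram(angles, bins=bins)
    # L1-normalize so descriptors from clouds of different densities
    # (e.g. a single scan vs. a dense local map) remain comparable.
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)

# Example: three synthetic normals, two along +x and one along +y.
normals = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 0.0, 0.0]])
desc = hopn_like_descriptor(normals, num_bins=4)
```

Descriptors built this way can then be matched (e.g. by nearest-neighbor search on the histogram vectors) before the robust pose estimation step; the bin count and normalization here are arbitrary choices for the sketch.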
Published in: IEEE Transactions on Intelligent Vehicles