Abstract

Global localization on LiDAR point cloud maps is a challenging task because of the sparse nature of point clouds and the large size difference between LiDAR scans and the maps. In this paper, we solve the LiDAR-based global localization problem under the plane-motion assumption. We first project the point clouds into Bird's-eye View (BV) images and transform the problem into a BV image matching problem. We then introduce a novel local descriptor, i.e., the Histogram of Orientations of Principal Normals (HOPN), to perform the matching. The HOPN descriptor encodes the point normals of the clouds and is more effective for matching BV images than common image descriptors. In addition, we present a consensus set maximization algorithm to robustly estimate a rigid pose from the HOPN matches in the presence of low inlier ratios. Experimental results on three large-scale datasets show that our method achieves state-of-the-art global localization performance when using either single LiDAR scans or local maps.
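To make the first step of the pipeline concrete, the following is a minimal sketch of projecting a LiDAR point cloud into a BV image. It is illustrative only and not the paper's implementation: the function name project_to_bv, the cell size, and the occupancy-count encoding are assumptions; the paper's BV projection and the HOPN descriptor built on top of it may differ in detail.

```python
import numpy as np

def project_to_bv(points, cell_size=0.2, grid_size=200):
    """Project a LiDAR point cloud (N x 3, x/y/z in metres) onto a
    bird's-eye-view (BV) image centred on the sensor.

    Illustrative sketch only; cell_size, grid_size, and the
    occupancy-count encoding are assumed, not taken from the paper.
    """
    half_extent = cell_size * grid_size / 2.0
    bv = np.zeros((grid_size, grid_size), dtype=np.float32)

    # Keep only points inside the square region around the sensor.
    mask = (np.abs(points[:, 0]) < half_extent) & (np.abs(points[:, 1]) < half_extent)
    pts = points[mask]

    # Convert metric x/y coordinates to integer pixel indices.
    cols = ((pts[:, 0] + half_extent) / cell_size).astype(int)
    rows = ((pts[:, 1] + half_extent) / cell_size).astype(int)
    cols = np.clip(cols, 0, grid_size - 1)
    rows = np.clip(rows, 0, grid_size - 1)

    # Simple occupancy count per cell (one could instead store max
    # height, intensity, or per-cell normal statistics such as HOPN).
    np.add.at(bv, (rows, cols), 1.0)
    return bv

if __name__ == "__main__":
    # Synthetic cloud: 10k random points in a 40 m x 40 m x 40 m cube.
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-20, 20, size=(10000, 3))
    image = project_to_bv(cloud)
    print(image.shape, image.max())
```

Under the plane-motion assumption, matching two such BV images reduces the 6-DoF localization problem to estimating a 2D rigid transform (rotation plus translation in the ground plane), which is what the HOPN matches and the consensus set maximization step recover.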
