Abstract

Global Navigation Satellite Systems (GNSS) can provide accurate absolute positions but require favorable observation conditions. In urban canyons and indoor scenes, GNSS suffers severe performance degradation or is entirely unavailable. To provide ubiquitous positioning, it is therefore necessary to incorporate complementary sensors. A popular option is the 3D LiDAR sensor, which emits its own light and is thus robust to illumination variation. 3D LiDAR sensors efficiently capture dense point measurements of the surrounding environment. Besides perception tasks, the acquired point clouds are also used for relative localization via registration, a process known as LiDAR odometry (LO). LO is generally based on structural information, with edge and planar features being extensively exploited. In this work, we propose a novel method to efficiently extract planes from sparse and noisy 3D LiDAR point clouds. To fully exploit the scanning pattern of the sensor, our method follows a point-to-line-to-plane framework. The point cloud is first projected onto a range image according to the azimuth and elevation of each point. In the point-to-line stage, consecutive flat points in a column are grouped into line segments, for which a new flat-point detector is introduced. In the line-to-plane stage, we extend a classical line extraction method, the Douglas-Peucker algorithm, to find planes among the line segments. To counter the over-segmentation caused by occlusion and deformation, we finally perform region growing and merging to obtain more complete results. Most importantly, we bridge the measurement noise model and the parameter uncertainty via error propagation to determine reasonable thresholds throughout the method. We test the proposed method on datasets collected by various LiDAR sensors. The experiments cover indoor and urban scenes, which contain abundant planar objects such as walls and building facades. Three point-level metrics, namely positive predictive value (PPV), true positive rate (TPR), and F1 score, are employed for quantitative evaluation. The average PPV, TPR, and F1 of the proposed method are 89.92%, 86.38%, and 88.11%, respectively. The results show that the proposed method recovers the dominant planar structure, which is valuable to LO. Moreover, with an average runtime of 15.6 ms per frame, the method easily keeps pace with the typical 10 Hz rotation rate of the LiDAR sensor and is thus suitable for online operation.
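To make the point-to-image projection step concrete, the sketch below bins each point by its azimuth and elevation angles and stores its range in the corresponding pixel. This is a minimal Python sketch under stated assumptions, not the paper's implementation; the image resolution, vertical field of view, and function name are illustrative.

```python
import numpy as np

def project_to_range_image(points, rows=64, cols=1024,
                           fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project an (N, 3) LiDAR point cloud onto a range image.

    Hypothetical parameters: a 64 x 1024 image and a +/-15 degree
    vertical field of view; real values depend on the sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range of each point

    azimuth = np.arctan2(y, x)                      # horizontal angle, [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # vertical angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)

    # Normalize both angles to [0, 1) and scale to pixel indices.
    u = (1.0 - (elevation - fov_down) / (fov_up - fov_down)) * rows
    v = (0.5 * (1.0 - azimuth / np.pi)) * cols

    u = np.clip(np.floor(u), 0, rows - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, cols - 1).astype(np.int32)

    image = np.zeros((rows, cols))  # 0 marks cells with no return
    image[u, v] = r                 # later points overwrite earlier hits per cell
    return image
```

With this image layout, the "consecutive flat points in a column" mentioned above correspond to vertically adjacent pixels, which is what makes the subsequent line grouping cheap.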
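For reference, the classical Douglas-Peucker algorithm that the line-to-plane stage generalizes works by recursively splitting a polyline at the point farthest from the chord between its endpoints, keeping the split point only if its deviation exceeds a tolerance. Below is a minimal 2D sketch of the classical algorithm; the tolerance and helper names are illustrative, and the paper's extension from line segments to planes is not shown here.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if np.allclose(a, b):
        return float(np.linalg.norm(p - a))
    # |2D cross product| / chord length.
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return abs(cross) / float(np.linalg.norm(b - a))

def douglas_peucker(points, tolerance):
    """Classical Douglas-Peucker simplification of a 2D polyline.

    points: list of np.array([x, y]); tolerance: maximum allowed
    deviation of any dropped point from the simplified polyline.
    """
    if len(points) < 3:
        return list(points)
    # Find the interior point farthest from the chord between the endpoints.
    dists = [point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    idx = int(np.argmax(dists)) + 1
    if dists[idx - 1] > tolerance:
        # Keep the farthest point and recurse on both halves.
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right   # avoid duplicating the split point
    # Every interior point lies within tolerance: keep only the chord.
    return [points[0], points[-1]]
```

The breakpoints that survive simplification are exactly where the local direction changes, which is why the same recursive idea is a natural fit for detecting where one planar region ends and another begins.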
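The threshold-selection idea rests on standard first-order error propagation: if the fitted parameters are a differentiable function $\theta = f(p)$ of the noisy measurements $p$ with covariance $\Sigma_p$, linearizing $f$ around the estimate $\hat{p}$ gives the textbook relation (generic notation, not the paper's):

$$
\Sigma_\theta \approx J \,\Sigma_p\, J^{\top}, \qquad J = \left.\frac{\partial f}{\partial p}\right|_{p = \hat{p}}
$$

One common choice, consistent with how the abstract describes deriving thresholds from the sensor noise model, is to set acceptance thresholds as multiples of the propagated standard deviations rather than as hand-tuned constants.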
