Abstract

Common approaches for vehicle localization propose to match LiDAR data or 2D camera features to a prior 3D LiDAR map. Yet, these methods require both heavy computational power, often provided by a GPU, and an initial rough localization estimate via GNSS to operate online. Moreover, storing and accessing dense 3D LiDAR maps can be challenging for city-wide coverage. In this paper, we address the problem of global camera relocalization in a prior 3D line-feature map from a single image, in a GNSS-denied context and with no prior pose estimate. We propose a twofold contribution. (1) We introduce a novel pose estimation method from lines (i.e., Perspective-n-Line, or PnL) with a known vertical direction. Our method uses a Gauss-Newton optimization scheme to compensate for sensor-induced vertical-direction errors and refine the overall pose. Our algorithm requires at least 3 lines to output a pose (P3L) and needs no reformulation to operate with a larger number of lines. (2) We propose a RANSAC (RANdom SAmple Consensus) 2D-3D line matching and outlier removal algorithm that requires only one 2D-3D line pair to operate, i.e., RANSAC1. Our method reduces the number of iterations required to match features and can easily be modified to exhaustively test all feature combinations. We evaluate the robustness of our algorithms on synthetic data and on a challenging sub-sequence of the KITTI dataset.
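To make the pose-refinement idea concrete, the sketch below illustrates a generic Gauss-Newton refinement of a camera pose from 2D-3D line correspondences. It is not the paper's exact formulation (which additionally exploits a known vertical direction and compensates its sensor-induced error); it only shows the standard line-to-interpretation-plane residual and a 6-DOF Gauss-Newton update with numerical Jacobians. All function and variable names are illustrative assumptions.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exp_so3(w):
    """Rodrigues formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def residuals(R, t, lines_3d, planes_2d):
    """Line-to-plane residuals for a candidate pose (R, t).

    Each 3D line is (P, d): a point and a unit direction in the world frame.
    Each 2D line is represented by n, the unit normal of its interpretation
    plane (the plane through the camera centre and the image line).
    A correct pose maps the 3D line into that plane, giving two residuals:
        n . (R P + t) = 0   (the transformed point lies in the plane)
        n . (R d)     = 0   (the transformed direction lies in the plane)
    """
    res = []
    for (P, d), n in zip(lines_3d, planes_2d):
        res.append(n @ (R @ P + t))
        res.append(n @ (R @ d))
    return np.array(res)

def gauss_newton_pose(R0, t0, lines_3d, planes_2d, iters=10):
    """Refine (R, t) with Gauss-Newton on the line-to-plane residuals.

    The 6-DOF update (delta_w, delta_t) applies the rotation on the left:
    R <- exp(delta_w) R. Jacobians are computed numerically for brevity.
    """
    R, t = R0.copy(), t0.copy()
    eps = 1e-6
    for _ in range(iters):
        r = residuals(R, t, lines_3d, planes_2d)
        J = np.zeros((len(r), 6))
        for k in range(6):
            dx = np.zeros(6)
            dx[k] = eps
            R_p = exp_so3(dx[:3]) @ R
            t_p = t + dx[3:]
            J[:, k] = (residuals(R_p, t_p, lines_3d, planes_2d) - r) / eps
        # Least-squares solve of the linearized system for the update step.
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        R = exp_so3(delta[:3]) @ R
        t = t + delta[3:]
        if np.linalg.norm(delta) < 1e-10:
            break
    return R, t
```

With at least three non-degenerate correspondences (the P3L minimal case mentioned above) the system is fully constrained, and adding more lines simply stacks more residual rows without reformulating the solver; a vertical-direction prior, as used in the paper, would further reduce the rotational degrees of freedom to be estimated.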
