Abstract

We propose a probabilistic framework for multi-modal global localisation using 3D point correspondences without needing to integrate over SE(3) for Bayesian inference. A finite set of transformation candidates is constructed by decomposing the known global map into local places and computing the maximum likelihood transformation at each place using place-specific 3D correspondences. An acceptance region around the maximum a posteriori candidate is then used to calculate the certainty of the location estimate. The 3D correspondences consist of 3D positions estimated by a LiDAR and horizon points observed by cameras. Our empirical results show that visual correspondences can increase the certainty of the estimated location and improve localisation performance when far from the trajectory used to construct the known global map. We analyse situations where improved rotation estimation of the transformation candidates reduces the certainty of the localisation. We also highlight the efficacy of the certainty as a measure of success and show that the framework's success rate increases by 12% when using the certainty as a termination criterion, compared to a state-of-the-art LiDAR intensity benchmark (Guo, 2019).
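
For intuition, the candidate-based inference described above can be sketched as follows. This is a minimal, simplified illustration under assumptions of our own, not the paper's implementation: candidates are summarised by their 3D positions rather than full SE(3) transforms, the prior over places defaults to uniform, and the acceptance region is taken to be a Euclidean ball around the MAP candidate; the function name `localisation_certainty` and its parameters are hypothetical.

```python
import numpy as np

def localisation_certainty(candidate_positions, log_likelihoods,
                           log_prior=None, acceptance_radius=1.0):
    """Select the MAP candidate from a finite set and estimate its certainty.

    candidate_positions : (N, 3) positions of the transformation candidates
                          (one per local place); a simplification of SE(3).
    log_likelihoods     : (N,) log-likelihood of the 3D correspondences
                          under each candidate transformation.
    log_prior           : optional (N,) log-prior over places (uniform if None).
    acceptance_radius   : radius (map units) of the acceptance region
                          around the MAP candidate (assumed Euclidean here).
    """
    if log_prior is None:
        log_prior = np.zeros(len(log_likelihoods))

    # Posterior over the finite candidate set, normalised with log-sum-exp
    # so no integration over SE(3) is required.
    log_post = log_likelihoods + log_prior
    log_post -= np.logaddexp.reduce(log_post)
    posterior = np.exp(log_post)

    # Maximum a posteriori candidate.
    map_idx = int(np.argmax(posterior))
    map_pos = candidate_positions[map_idx]

    # Certainty = posterior mass falling inside the acceptance region.
    dists = np.linalg.norm(candidate_positions - map_pos, axis=1)
    certainty = float(posterior[dists <= acceptance_radius].sum())
    return map_idx, certainty
```

Under these assumptions, the termination criterion mentioned in the abstract would correspond to accepting the MAP candidate once the returned certainty exceeds a chosen threshold, and gathering further correspondences otherwise.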
