Abstract

Robust localization in large-scale environments is a persistent challenge for mobile robots. This article proposes a novel system for accurate and robust visual localization in large-scale environments whose appearance changes over time. Our system begins by extracting stable visual features with an object segmentation network. After measurement postprocessing and extrinsic precalibration, a graph-based optimization module jointly estimates the optimal pose and the extrinsics. The optimization constraints are built from refined wheel odometry, feature matching between images, and correspondences between images and the prebuilt map. We evaluate our segmentation module on our proposed datasets and test our localization module on seven sequences (9.8 km total length) recorded in real port scenes under working conditions ranging from day to night and from sunny to rainy. Experimental results demonstrate decimeter-level accuracy and robust performance in various challenging scenarios, and show that our approach is competitive with state-of-the-art LiDAR-based localization methods.
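
The abstract does not specify how the segmentation output is used to select stable features. The snippet below is a minimal illustrative sketch of one common approach: detect local features and discard those falling on non-stable semantic classes. The choice of ORB as the detector and the class ids in STABLE_CLASSES are assumptions for illustration, not the paper's actual design.

```python
import cv2
import numpy as np

# Hypothetical ids of "stable" semantic classes in the segmentation output
# (e.g., buildings, ground markings); the real set depends on the network.
STABLE_CLASSES = {1, 2, 3}

def stable_features(image_gray, seg_labels):
    """Detect ORB features and keep only those lying on stable segments.

    image_gray: HxW uint8 grayscale image.
    seg_labels: HxW integer array of per-pixel class ids.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(image_gray, None)
    # Keep keypoints whose pixel location falls on a stable class.
    kept = [kp for kp in keypoints
            if seg_labels[int(kp.pt[1]), int(kp.pt[0])] in STABLE_CLASSES]
    keypoints, descriptors = orb.compute(image_gray, kept)
    return keypoints, descriptors
```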
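
The abstract likewise gives no formulation of the graph-based optimization. As a rough illustration of the constraint types it names, the sketch below sets up a small 2-D pose-graph least-squares problem in Python with SciPy: odometry measurements constrain consecutive poses, and absolute pose observations (standing in for image-to-map correspondences) anchor the trajectory. All measurement values are made up, the problem is simplified from SE(3) to SE(2), and the extrinsic and image-to-image matching terms of the actual system are omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(theta):
    """Wrap an angle to (-pi, pi]."""
    return (theta + np.pi) % (2 * np.pi) - np.pi

def relative_pose(xa, xb):
    """Pose of xb expressed in the frame of xa; poses are (x, y, theta)."""
    dx, dy = xb[0] - xa[0], xb[1] - xa[1]
    c, s = np.cos(xa[2]), np.sin(xa[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xb[2] - xa[2])])

# Illustrative measurements: relative odometry between consecutive poses,
# and absolute pose observations from map correspondences.
odometry = [(0, 1, np.array([1.0, 0.0, 0.1])),
            (1, 2, np.array([1.0, 0.0, 0.1]))]
map_obs  = [(0, np.array([0.0, 0.0, 0.0])),
            (2, np.array([1.9, 0.25, 0.2]))]
n_poses = 3

def residuals(flat):
    poses = flat.reshape(n_poses, 3)
    res = []
    for i, j, meas in odometry:      # odometry (between-pose) factors
        d = relative_pose(poses[i], poses[j]) - meas
        d[2] = wrap(d[2])
        res.append(d)
    for i, meas in map_obs:          # map-correspondence (absolute) factors
        d = poses[i] - meas
        d[2] = wrap(d[2])
        res.append(d)
    return np.concatenate(res)

x0 = np.zeros(n_poses * 3)           # initial guess: all poses at the origin
sol = least_squares(residuals, x0)
print(sol.x.reshape(n_poses, 3))     # optimized (x, y, theta) per pose
```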
