Abstract

Robust long-term visual localization is challenging for a mobile robot, especially in changing environments, where dynamic scene changes degrade localization accuracy or even cause failures. Most existing methods discard dynamic changes as outliers, relying strictly on a static-world assumption. In contrast, we exploit the hidden regularities of these changes to improve localization performance. In particular, we design a feature existence state (FES) matrix to measure the evolution of time-varying changes, built incrementally over long-term runs. To address the timeliness problem of fixed parameters in offline-trained models, we propose an adaptive online stochastic learning (AOSL) method to model and predict the changing regularities of streaming feature states. The features with the highest probability of being observed can then be selected to boost visual localization. Leveraging the proposed AOSL method, we develop a lightweight and robust long-term topological localization system. Furthermore, the performance of our method is compared against state-of-the-art methods in different challenging scenes, including both public benchmarks and real-world experiments. Extensive experimental results validate that our method achieves better localization accuracy and memory efficiency, and has competitive real-time performance.
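To make the FES matrix and the online feature-selection idea concrete, the following is a minimal sketch, not the paper's implementation: the FES matrix stores one binary row per feature and one column per run, and a hypothetical stand-in for AOSL (here, a stochastic update of per-feature observation probabilities with an error-adaptive step size) predicts which features are most likely to be observed next. All class and parameter names are illustrative assumptions.

```python
import numpy as np

class FESMatrix:
    """Sketch of a feature existence state (FES) matrix: one row per
    landmark feature, one column per run, binary entries marking
    whether the feature was observed. Grown incrementally."""

    def __init__(self, num_features: int):
        self.states = np.empty((num_features, 0), dtype=np.uint8)

    def append_run(self, observed: np.ndarray) -> None:
        # observed: binary vector of shape (num_features,) for the latest run.
        self.states = np.hstack([self.states, observed.reshape(-1, 1)])


class OnlineObservationModel:
    """Hypothetical stand-in for AOSL: an online stochastic update of
    each feature's observation probability, with a step size that
    adapts to recent prediction error so non-stationary (time-varying)
    regularities are tracked quickly."""

    def __init__(self, num_features: int, base_lr: float = 0.1):
        self.p = np.full(num_features, 0.5)  # observation probabilities
        self.err = np.zeros(num_features)    # smoothed prediction error
        self.base_lr = base_lr

    def update(self, observed: np.ndarray) -> None:
        residual = observed - self.p
        # Larger recent error -> larger step, so quickly changing
        # features are re-learned faster (adaptive step size).
        self.err = 0.9 * self.err + 0.1 * np.abs(residual)
        lr = np.clip(self.base_lr + self.err, 0.0, 1.0)
        self.p += lr * residual  # keeps p in [0, 1] since lr <= 1

    def select_features(self, k: int) -> np.ndarray:
        # Indices of the k features most likely to be observed next run.
        return np.argsort(self.p)[::-1][:k]


# Usage: feed each run's binary observation vector, then query the
# most persistent features for the next localization attempt.
fes = FESMatrix(num_features=6)
model = OnlineObservationModel(num_features=6)
for run in [np.array([1, 1, 0, 1, 0, 0]),
            np.array([1, 0, 0, 1, 0, 1]),
            np.array([1, 1, 0, 1, 0, 1])]:
    fes.append_run(run)
    model.update(run.astype(float))
print(model.select_features(k=3))
```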
