Abstract

Robust long-term visual localization is challenging for a mobile robot, especially in changing environments, where dynamic scene changes degrade localization accuracy or even cause failures. Most existing methods eliminate dynamic changes as outliers, relying strictly on the static-world assumption. Conversely, we efficiently exploit the hidden regularities of changes to improve localization performance. In particular, we design a feature existence state (FES) matrix to measure the evolution of time-varying changes, which is built incrementally over long-term runs. To address the timeliness problem of fixed parameters in offline-trained models, we propose an adaptive online stochastic learning (AOSL) method to model and predict the changing regularities of streaming feature states. The features with the largest probability of being observed can therefore be selected to boost visual localization. Leveraging the proposed AOSL method, we develop a lightweight and robust long-term topological localization system. Furthermore, we compare the performance of our method against state-of-the-art methods in different challenging scenes, on both a public benchmark and real-world experiments. Extensive experimental results validate that our method achieves better localization accuracy and memory efficiency, and has competitive real-time performance.
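The abstract does not specify the FES matrix or the AOSL update rule, so the following is only a minimal sketch of the general idea: track, per map feature, a running probability of being observed, update it online from streaming binary observation states, and select the features most likely to be visible. The class name, the learning rate `eta`, and the exponential-forgetting update are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

class FeatureExistenceTracker:
    """Toy online model of per-feature observation probability.

    Hypothetical stand-in for the paper's FES/AOSL machinery:
    a simple exponential-forgetting estimate of how likely each
    feature is to be observed in the next frame.
    """

    def __init__(self, num_features, eta=0.1):
        # Prior: each feature assumed observable with probability 0.5.
        self.p = np.full(num_features, 0.5)
        self.eta = eta  # illustrative online learning rate

    def update(self, observed):
        """observed: binary vector (1 = feature matched in current frame)."""
        observed = np.asarray(observed, dtype=float)
        # Online stochastic update: move each probability toward the
        # latest observation (exponential forgetting of old states).
        self.p += self.eta * (observed - self.p)

    def select(self, k):
        """Indices of the k features most likely to be observed next."""
        return np.argsort(self.p)[::-1][:k]

tracker = FeatureExistenceTracker(num_features=5)
for frame in ([1, 0, 1, 1, 0], [1, 0, 1, 0, 0], [1, 1, 1, 0, 0]):
    tracker.update(frame)
print(tracker.select(2))  # the two most persistently observed features
```

Features 0 and 2 are observed in every frame here, so their probabilities rise while the others decay, and they are the ones selected for localization.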

