Abstract

Most approaches to robot visual localization rely on local visual, global visual, or semantic information as observations. In this paper, a combination of global visual and semantic information serves as the landmark in the observation model of Bayesian filters. By introducing an improved Gaussian process into observation models with visual information, the GP-Localize algorithm is extended to high-dimensional data, so that the Bayesian filters can exploit all historical data with spatiotemporal correlation while achieving constant time and memory for persistent outdoor robot localization. A further contribution of this paper is the integration of all of the above components into a complete robot visual localization system, which is evaluated on two real-world outdoor datasets collected by an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV). The experimental results show that combining global visual and semantic features yields higher localization accuracy than using a single feature alone. With the combined features and the improved Gaussian process approximation in the Bayes filters, the proposed system is more robust and practical than existing localization systems such as ORB-SLAM.
