Abstract

Fusing vision with odometry/attitude and heading reference system (AHRS; tri-axial gyroscopes, accelerometers, and magnetometers) sensors has become a prevalent strategy for robot localization in recent years, owing to its low cost and its effectiveness in global positioning system (GPS)-denied environments. In this paper, a new adaptive estimation algorithm is proposed that estimates the robot position by fusing monocular vision with odometry/AHRS sensors and exploiting the properties of perspective projection. With the new method, the robot can be localized in real time in GPS-denied, mapless environments, and the localization results are theoretically proven to converge to their true values. Compared with other methods, our algorithm is simple to implement and well suited to parallel processing. To achieve real-time performance, the algorithm is implemented in parallel on a graphics processing unit (GPU); it can therefore be easily integrated into mobile-robot tasks such as navigation and motion control that require real-time localization information. Simulations and experiments were conducted to validate the convergence and long-term robustness of the proposed real-time localization algorithm.
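To make the abstract's core idea concrete, the sketch below illustrates one way to correct an odometry-predicted 3-D position using monocular measurements through a pinhole perspective projection model. It is a minimal illustrative assumption, not the paper's algorithm: the fixed-gain gradient correction, the single known landmark, and all names and parameters (F, gain, localize, project) are hypothetical.

```python
# Minimal sketch (an illustrative assumption, not the paper's method):
# predict the robot position from odometry increments, then refine it
# by descending the reprojection error of one known landmark.
import numpy as np

F = 500.0  # hypothetical camera focal length in pixels


def project(rel):
    """Pinhole perspective projection of a point rel = (x, y, z) in the
    camera frame, with z along the optical axis (assumed z > 0)."""
    x, y, z = rel
    return np.array([F * x / z, F * y / z])


def localize(odometry, pixels, landmark, p0, gain=1e-4, iters=5):
    """Predict with odometry, then correct the position estimate by
    gradient descent on the landmark's reprojection residual."""
    p_hat = np.asarray(p0, dtype=float)
    estimates = []
    for dp, uv in zip(odometry, pixels):
        p_hat = p_hat + dp                  # odometry prediction step
        for _ in range(iters):
            rel = landmark - p_hat          # landmark in the camera frame
            x, y, z = rel
            err = uv - project(rel)         # reprojection residual
            # Jacobian of the projection w.r.t. rel; because
            # rel = landmark - p_hat, d(residual)/d(p_hat) equals A.
            A = np.array([[F / z, 0.0, -F * x / z**2],
                          [0.0, F / z, -F * y / z**2]])
            p_hat = p_hat - gain * A.T @ err  # gradient-style correction
        estimates.append(p_hat.copy())
    return np.array(estimates)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_path = np.cumsum(np.full((50, 3), [0.1, 0.0, 0.05]), axis=0)
    landmark = np.array([2.0, 1.0, 10.0])
    dps = np.diff(true_path, axis=0, prepend=np.zeros((1, 3)))
    pixels = [project(landmark - p) + rng.normal(0.0, 0.5, 2)
              for p in true_path]
    est = localize(dps, pixels, landmark, p0=[0.3, -0.2, 0.1])
    print("final position error:", np.linalg.norm(est[-1] - true_path[-1]))
```

In this toy setup the odometry increments are exact, so the initial offset persists until bearing residuals gathered from different viewpoints constrain it; the paper's adaptive estimator (or a Kalman-type filter) would take the place of the fixed-gain gradient step used here.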
