Abstract

Template matching visual odometry is an important class of visual odometry for the navigation and positioning of mobile robots. However, traditional template matching visual odometry relies mainly on the Ackermann steering model, which limits its applicability. In this paper, a monocular visual odometry method based on template matching and an IMU is proposed. It improves on traditional template matching visual odometry by using the yaw angle provided by the IMU in place of the rotation angle calculated from the Ackermann steering model. The introduction of the IMU makes the visual odometry applicable to any robot or vehicle model. Several experiments were carried out on different types of roads. The results show that, owing to the introduction of the IMU yaw angle, the proposed model better restrains the divergence of the visual odometry and effectively improves its positioning accuracy. Over a distance of 355 meters, the expected position error is about 2.32%.
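The abstract describes replacing the rotation angle of the Ackermann steering model with the yaw angle from the IMU when integrating the per-frame translation obtained by template matching. The paper's own implementation is not given here; the short Python sketch below is only an illustrative guess at such a dead-reckoning update, and the function and variable names (update_pose, dx_cam, dy_cam, yaw_imu) are hypothetical.

```python
import numpy as np

def update_pose(x, y, dx_cam, dy_cam, yaw_imu):
    """Accumulate one odometry step (illustrative sketch, not the paper's code).

    dx_cam, dy_cam: translation of the ground template between consecutive
    frames in the body frame (e.g. metres, obtained by template matching
    and scaled using the known camera height).
    yaw_imu: absolute yaw angle from the IMU (radians), used instead of a
    rotation angle derived from the Ackermann steering model.
    """
    # Rotate the body-frame displacement into the world frame with the IMU yaw.
    c, s = np.cos(yaw_imu), np.sin(yaw_imu)
    x_new = x + c * dx_cam - s * dy_cam
    y_new = y + s * dx_cam + c * dy_cam
    return x_new, y_new

# Example: integrate a short trajectory from per-frame measurements.
steps = [
    (0.10, 0.00, 0.00),   # (dx_cam, dy_cam, yaw_imu)
    (0.10, 0.01, 0.05),
    (0.09, 0.00, 0.10),
]
x, y = 0.0, 0.0
for dx, dy, yaw in steps:
    x, y = update_pose(x, y, dx, dy, yaw)
print(f"estimated position: ({x:.3f}, {y:.3f}) m")
```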
