Abstract

Monocular visual odometry (VO) is one of the most practical approaches to autonomous vehicle positioning, allowing a vehicle to localize itself in a completely unknown environment. Although some existing VO algorithms have demonstrated their effectiveness, they usually require careful re-tuning to perform well with a different camera or in a different environment. Existing deep-learning-based VO methods require little manual calibration, but most of them consume a tremendous amount of computing resources and cannot achieve real-time VO. We propose a highly real-time VO system based on optical flow and a DenseNet structure, combined with an inertial measurement unit (IMU). It cascades an optical flow network and a DenseNet structure to estimate translation and rotation, then uses the estimated motion together with the IMU for map construction and self-correction. We have evaluated its computational complexity and performance on the KITTI dataset. The experiments show that the proposed system requires less than 50% of the computing power of mainstream deep-learning VO methods, while also achieving 30% higher translation accuracy.
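The cascade described above can be sketched as a simple data pipeline: an optical-flow stage feeds a DenseNet-style regressor that outputs translation and rotation, which are then corrected with IMU measurements. This is a minimal illustrative sketch only; all function names, the pooling step, and the complementary-filter fusion are placeholder assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of the cascaded VO pipeline: flow -> pose -> IMU fusion.
# Every component here is a stub standing in for a learned network.
import numpy as np

def optical_flow(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Placeholder flow stage: returns a 2-channel (u, v) flow field.
    A real system would run a learned optical-flow network here."""
    diff = frame_b - frame_a
    return np.stack([diff, diff], axis=0)

def densenet_pose_regressor(flow: np.ndarray):
    """Placeholder DenseNet stage mapping a flow field to (translation, rotation).
    Crude global pooling stands in for the dense blocks, to show data flow."""
    pooled = flow.mean(axis=(1, 2))            # shape (2,)
    translation = np.array([pooled[0], pooled[1], 0.0])
    rotation = np.zeros(3)                     # axis-angle stub
    return translation, rotation

def fuse_with_imu(translation, rotation, imu_gyro, imu_accel, dt):
    """Simple complementary-filter-style correction using IMU rates
    (an assumed fusion scheme, not necessarily the paper's)."""
    alpha = 0.9                                # trust weight for the vision estimate
    rotation_fused = alpha * rotation + (1 - alpha) * imu_gyro * dt
    translation_fused = alpha * translation + (1 - alpha) * 0.5 * imu_accel * dt**2
    return translation_fused, rotation_fused

# Two synthetic grayscale frames and one IMU sample drive the pipeline.
f0, f1 = np.zeros((8, 8)), np.ones((8, 8))
flow = optical_flow(f0, f1)
t, r = densenet_pose_regressor(flow)
t, r = fuse_with_imu(t, r, imu_gyro=np.zeros(3), imu_accel=np.zeros(3), dt=0.1)
print(t, r)                                    # each a 3-vector pose component
```

The point of the sketch is the staging: per-frame appearance is reduced to a flow field once, so the pose regressor operates on a much smaller input than a full end-to-end network, which is consistent with the abstract's claimed compute savings.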
