Abstract

Visual odometry uses image information to estimate the ego-motion of a camera and is a practical technique for the navigation of autonomous vehicles. This paper presents a novel stereo-based ego-motion estimation method for this purpose. The proposed method employs a motion decoupling strategy that uses a set of distant feature points to estimate the camera's rotation and a set of close feature points to estimate the translation. The feature points used for motion estimation are carefully screened with bucketing and aging techniques that consider their spatial distribution and temporal evolution. It has been found that the age of a tracked feature has a significant impact on motion estimation: older features exhibit smaller tracking errors and should be preferred for motion estimation. The proposed method is tested on the KITTI benchmark dataset and evaluated against existing visual odometry systems. The experimental results demonstrate that the proposed method significantly improves computational efficiency while maintaining satisfactory accuracy.

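The decoupling strategy described above can be illustrated with a minimal sketch: distant points are nearly insensitive to translation, so their bearing directions constrain the rotation, and once the rotation is fixed, close points constrain the translation. The sketch below is illustrative only and is not the authors' implementation; the feature record layout, the grid size, and the depth threshold are assumptions.

```python
# Minimal sketch of decoupled rotation/translation estimation with
# bucketing + age-based feature selection. Each feature is assumed to be a
# dict with pixel coordinates "uv", triangulated 3D points "p_prev"/"p_curr"
# (camera frame, depth along z), and a tracking "age". Names and thresholds
# are illustrative.
import numpy as np

def bucket_and_age_select(features, img_w, img_h, grid=(8, 4)):
    """Keep the oldest tracked feature in each image bucket
    (enforces spatial spread and prefers long-tracked features)."""
    buckets = {}
    for f in features:
        gx = min(int(f["uv"][0] / img_w * grid[0]), grid[0] - 1)
        gy = min(int(f["uv"][1] / img_h * grid[1]), grid[1] - 1)
        key = (gx, gy)
        if key not in buckets or f["age"] > buckets[key]["age"]:
            buckets[key] = f
    return list(buckets.values())

def estimate_rotation(distant):
    """Rotation from distant points: translation is negligible at large depth,
    so align the bearing directions of previous/current points (Kabsch/SVD)."""
    P = np.array([f["p_prev"] / np.linalg.norm(f["p_prev"]) for f in distant])
    Q = np.array([f["p_curr"] / np.linalg.norm(f["p_curr"]) for f in distant])
    U, _, Vt = np.linalg.svd(Q.T @ P)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt  # R such that p_curr ~ R @ p_prev

def estimate_translation(close, R):
    """Translation from close points once the rotation is fixed:
    p_curr ~ R @ p_prev + t, so average the residuals."""
    residuals = [f["p_curr"] - R @ f["p_prev"] for f in close]
    return np.mean(residuals, axis=0)

def decoupled_motion(features, img_w, img_h, depth_split=30.0):
    """Full pipeline: select features, split by depth, estimate R then t."""
    selected = bucket_and_age_select(features, img_w, img_h)
    distant = [f for f in selected if f["p_prev"][2] > depth_split]
    close = [f for f in selected if f["p_prev"][2] <= depth_split]
    R = estimate_rotation(distant)
    t = estimate_translation(close, R)
    return R, t
```

In practice the rotation and translation steps would each be wrapped in a robust estimator (e.g. RANSAC) rather than the plain least-squares averaging shown here.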