Knowledge of the ego-vehicle’s motion state is essential for assessing collision risk in advanced driver assistance systems and autonomous driving. Vision-based methods for estimating the ego-motion of a vehicle, i.e., visual odometry, face a number of challenges in uncontrolled, realistic urban environments, and existing solutions fail to achieve a good tradeoff between high accuracy and low computational complexity. In this paper, a framework for ego-motion estimation is proposed that integrates runtime-efficient strategies with robust techniques at the core stages of visual odometry. First, a pruning method is employed to reduce the computational complexity of Kanade–Lucas–Tomasi (KLT) feature detection without compromising the quality of the features. Next, three strategies, i.e., a smooth motion constraint, an adaptive integration window technique, and an automatic tracking-failure detection scheme, are introduced into the conventional KLT tracker to generate feature correspondences in a robust and runtime-efficient manner. Finally, an early termination condition for the random sample consensus (RANSAC) algorithm is integrated with a Gauss–Newton optimization scheme so that the motion estimation process converges rapidly while remaining robust. Experimental results on the KITTI odometry dataset show that the proposed technique outperforms state-of-the-art visual odometry methods, producing more accurate ego-motion estimates in notably less time.
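To make the detection-and-tracking stage concrete, the sketch below shows a minimal KLT loop in Python built from OpenCV's standard primitives (`goodFeaturesToTrack`, `calcOpticalFlowPyrLK`). It is not the authors' implementation: a common forward-backward consistency check stands in for the paper's automatic tracking-failure detection, and all thresholds, window sizes, and pyramid levels are illustrative assumptions.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, max_corners=500, fb_thresh=1.0):
    """Detect KLT features in prev_gray and track them into curr_gray.

    A forward-backward consistency check stands in for the paper's
    tracking-failure detection; all parameter values are illustrative.
    """
    # Shi-Tomasi ("good features to track") corners feed the KLT tracker.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Fixed 21x21 integration window for simplicity; the paper's adaptive
    # window technique would instead vary this size.
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                        30, 0.01))

    # Track forward, then backward, and keep only points that return close
    # to where they started (forward-backward error check).
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None, **lk)
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None, **lk)
    fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()
    good = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)

    return pts[good].reshape(-1, 2), fwd[good].reshape(-1, 2)
```

The paper's smooth motion constraint and feature pruning would further restrict where corners are detected and how far the tracker searches between frames; they are omitted here to keep the sketch short.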
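The final stage can likewise be illustrated with the textbook adaptive stopping rule for RANSAC, under which the required number of iterations shrinks as the observed inlier ratio improves. This is only a plausible stand-in for the paper's early-termination condition, whose exact form the abstract does not give; `fit_model` and `residuals` are hypothetical callables supplied by the caller.

```python
import numpy as np

def ransac_adaptive(data, fit_model, residuals, sample_size,
                    inlier_thresh, confidence=0.99, max_iters=1000):
    """Generic RANSAC with the standard adaptive early-termination rule.

    The required iteration count N = log(1 - p) / log(1 - w**s) is updated
    whenever a better inlier ratio w is observed (p = confidence,
    s = sample_size), so the loop stops early on easy problems.
    """
    best_inliers = np.zeros(len(data), dtype=bool)
    n_required = max_iters
    rng = np.random.default_rng()
    i = 0
    while i < n_required and i < max_iters:
        sample = data[rng.choice(len(data), sample_size, replace=False)]
        model = fit_model(sample)
        inliers = residuals(model, data) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            w = inliers.mean()  # current estimate of the inlier ratio
            # Iterations needed to draw one all-inlier sample with
            # probability `confidence`.
            denom = np.log(max(1.0 - w ** sample_size, 1e-12))
            n_required = int(np.ceil(np.log(1.0 - confidence) / denom))
        i += 1
    # In the paper's pipeline, a Gauss-Newton refinement of the motion
    # parameters over best_inliers would follow here.
    return best_inliers
```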