Abstract

Visual odometry (VO) is an essential component of visual SLAM and serves as the driving engine of many autonomous navigation systems. Traditional visual odometry recovers camera motion from a pair of consecutive images, known as the frame-to-frame approach. This paper introduces multiple-frame integration for stereo visual odometry, aiming to reduce drift by consecutively refining the transformation and feature locations. First, rotation is estimated from frame-to-frame VO based on an essential matrix and then refined using a loop-closure constraint over three consecutive camera frames. Second, 2D feature locations are gradually updated from their corresponding points in the previous frame through epipolar constraints. An experimental comparison on a publicly available benchmark, the KITTI dataset, shows that the proposed approach improves both rotation and translation accuracy by around 20% over traditional approaches under the same experimental conditions.
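The rotation estimation described above rests on the essential matrix, which encodes the relative rotation R and translation t between two camera frames through the epipolar constraint x2ᵀ E x1 = 0 on normalized image coordinates. As a minimal illustration only (not the paper's implementation), the following NumPy sketch builds E = [t]× R from a hypothetical inter-frame motion and verifies that synthetic corresponding points satisfy the constraint:

```python
import numpy as np

def skew(t):
    # Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical camera motion between two frames: a 2-degree yaw
# plus a forward translation (values chosen for illustration only).
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.0, 0.0, 1.0])

# Essential matrix for the convention X2 = R @ X1 + t
E = skew(t) @ R

# Synthetic 3D points expressed in the first camera frame
X1 = np.array([[0.5, -0.2, 4.0],
               [-1.0, 0.3, 6.0],
               [0.2,  0.8, 5.0]])
X2 = (R @ X1.T).T + t          # same points in the second camera frame

x1 = X1 / X1[:, 2:3]           # normalized image coordinates, frame 1
x2 = X2 / X2[:, 2:3]           # normalized image coordinates, frame 2

# Epipolar residuals x2^T E x1 should vanish for true correspondences
residuals = np.einsum('ij,jk,ik->i', x2, E, x1)
print(np.abs(residuals).max())
```

In practice, E is estimated from noisy feature matches (e.g. with a five-point solver inside RANSAC) and decomposed to recover R and t up to scale; the residual above is the quantity such methods minimize.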
