Abstract

Simultaneous localization and mapping (SLAM) has been investigated extensively with the rising interest in autonomous driving. Visual odometry (VO) is a variant of SLAM without global consistency that estimates the position and orientation of a moving object by analyzing the image sequences captured by its associated cameras. In real-world applications, however, drift errors inevitably accumulate in the VO process because poses are estimated frame by frame, and the drift can be more severe for monocular VO than for stereo matching. By jointly refining the camera poses over several local keyframes and the coordinates of the 3D map points triangulated from extracted features, bundle adjustment (BA) can mitigate the drift problem only to some extent. To further improve performance, we introduce a traffic sign feature-based joint BA module to relieve the incrementally accumulated pose errors. Continuously extracted traffic sign features, with their standard size and planar geometry, provide powerful additional constraints for improving VO estimation accuracy through BA. Our framework collaborates well with existing VO systems, e.g., ORB-SLAM2, and the traffic sign feature can also be replaced with features extracted from other size-known planar objects. Experimental results show that applying our traffic sign feature-based BA module improves vehicular localization accuracy over the state-of-the-art baseline VO method.
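The abstract describes augmenting bundle adjustment with a size prior from a detected traffic sign. As a minimal illustrative sketch (not the authors' implementation), the idea can be expressed as a nonlinear least-squares problem whose residuals combine standard reprojection errors with an extra term forcing the reconstructed sign corners to match the known physical side length. The intrinsics, sign size, observation setup, and weighting below are all assumed toy values:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical toy setup: one camera observing the 4 corners of a square
# traffic sign whose physical side length is assumed known (0.6 m here).
SIGN_SIDE = 0.6  # assumed standard sign size (metres)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics

def project(pose6, pts3d):
    """Project world points through a 6-DoF pose (rotation vector + translation)."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    cam = pts3d @ R.T + pose6[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(x, obs):
    """Reprojection errors plus size-prior residuals on the sign corners."""
    pose6, pts3d = x[:6], x[6:].reshape(-1, 3)
    r_proj = (project(pose6, pts3d) - obs).ravel()
    # Extra BA constraint: consecutive corners must be SIGN_SIDE apart,
    # which fixes the scale that monocular reprojection alone cannot.
    sides = np.linalg.norm(pts3d - np.roll(pts3d, -1, axis=0), axis=1)
    r_size = sides - SIGN_SIDE
    return np.concatenate([r_proj, 10.0 * r_size])  # arbitrary prior weight

# Ground-truth square sign, 5 m in front of the camera.
gt_pts = np.array([[-0.3, -0.3, 5.0], [0.3, -0.3, 5.0],
                   [0.3, 0.3, 5.0], [-0.3, 0.3, 5.0]])
gt_pose = np.zeros(6)
obs = project(gt_pose, gt_pts)

# Perturbed initial estimate, simulating accumulated VO drift.
x0 = np.concatenate([gt_pose + 0.05, (gt_pts + 0.1).ravel()])
sol = least_squares(residuals, x0, args=(obs,))
```

After optimization, the recovered sign corners satisfy both the image observations and the metric side-length constraint; in a full system this residual block would be added alongside the usual keyframe reprojection terms in the local BA.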
