Abstract

Automotive visual odometry has become a highly researched topic, and published work has turned cameras into a very precise source of ego-motion estimation. For automotive application, however, visual odometry has to be integrated into a sensor fusion system to simultaneously obtain global localization, maximum availability, and highest precision. Since any visual odometry remains sensitive to its environment, good motion estimates cannot always be guaranteed. Consequently, the lack of a self-validation scheme is one of the barriers to its application in a sensor fusion system. To solve this problem, we first formulate an Ackermann vehicle's motion as a function of its forward speed and yaw rate. Second, we present a data-driven model that reconstructs the sideward speed as a function of the yaw rate alone. As we show, both models reach the quality of the sideward motion estimated by visual odometry. The resulting redundancy can be used for different tasks. The estimation of the sideward motion could, of course, be excluded from the visual odometry scheme to save computation time; this is of special interest for monocular systems, where the absolute scale of a translational motion cannot yet be calculated directly. Instead, we propose to keep the estimation of the sideward motion in the visual odometry and to compare its result to the modeled motion. As we show, the resulting deviation is a very good metric for self-validation of the overall visual odometry estimate. Integrating the resulting method into our visual odometry system, we currently achieve the best frame-to-frame result on the KITTI benchmark.
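
For illustration only, the following is a minimal sketch of the self-validation idea described above, assuming a linear Ackermann-style relation between yaw rate and sideward speed. The lever arm, threshold, and all function names are hypothetical placeholders; the paper's actual motion models and parameters are not reproduced here.

```python
def modeled_sideward_speed(yaw_rate, lever_arm=1.5):
    """Sideward speed predicted from the yaw rate alone.

    Assumes a linear kinematic relation v_y ~ yaw_rate * lever_arm, where
    the lever arm (sensor offset from the rear axle, in meters) is an
    assumed placeholder value, not a parameter from the paper.
    """
    return yaw_rate * lever_arm


def validate_vo_frame(vo_sideward_speed, yaw_rate, threshold=0.05):
    """Self-validate one frame-to-frame visual odometry estimate.

    Compares the sideward speed estimated by visual odometry against the
    modeled one; a large deviation flags the whole VO estimate as suspect.
    The threshold (m/s) is an assumed tuning parameter.
    """
    deviation = abs(vo_sideward_speed - modeled_sideward_speed(yaw_rate))
    return deviation <= threshold, deviation


# Example: the VO sideward estimate agrees with the modeled motion,
# so the frame-to-frame estimate is accepted.
ok, dev = validate_vo_frame(vo_sideward_speed=0.12, yaw_rate=0.08)
print(ok, round(dev, 3))  # True 0.0
```

The key design point is that the sideward channel is kept in the visual odometry pipeline purely to create redundancy: the model prediction is never fused in, only compared against, so the deviation serves as an independent quality signal for the whole estimate.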
