Abstract

Although many impressive learning-based camera ego-motion estimation methods have been proposed recently, most of them improve the accuracy of camera pose estimation through various sequential learning schemes with loop-closure optimization, while neglecting improvements to the PoseNet itself. In this paper, we focus on the coupling of rotation and translation in ego-motion estimation, and design a cascade decoupling structure that separately learns the rotation and translation of the camera's relative motion between adjacent frames. In addition, a rigid-aware unsupervised learning framework with an iterative pose refinement scheme is proposed for camera ego-motion estimation. It disambiguates rigid motion from deformations in dynamic scenes by jointly learning optical flow, stereo disparity, and camera pose. Evaluation experiments on publicly available datasets show that our method outperforms state-of-the-art unsupervised methods and achieves results comparable to supervised ones.

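To make the cascade decoupling idea concrete, the following is a minimal PyTorch-style sketch of what a decoupled pose head could look like: rotation and translation are predicted by separate branches, with the translation branch cascaded after the rotation estimate. The module names, feature dimension, and the specific conditioning order are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a cascade decoupling pose head (not the authors' code).
import torch
import torch.nn as nn

class DecoupledPoseHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Rotation branch: predicts a 3-vector (axis-angle) from paired-frame features.
        self.rot_branch = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(inplace=True), nn.Linear(128, 3)
        )
        # Translation branch: cascaded after rotation, it also sees the rotation estimate.
        self.trans_branch = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(inplace=True), nn.Linear(128, 3)
        )

    def forward(self, feat: torch.Tensor):
        rot = self.rot_branch(feat)  # (B, 3) axis-angle rotation
        # Detach the rotation estimate so translation gradients do not flow back
        # through the rotation branch (one possible decoupling choice).
        trans = self.trans_branch(torch.cat([feat, rot.detach()], dim=1))  # (B, 3)
        return rot, trans

# Usage: features pooled from an encoder over two adjacent frames.
pose_head = DecoupledPoseHead(feat_dim=256)
rot, trans = pose_head(torch.randn(4, 256))
```

Separating the two branches reflects the paper's premise that rotation and translation are coupled in a single-head PoseNet; how the refinement iterations and rigid-aware losses are wired in would follow the full paper rather than this sketch.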