Abstract

A new computational approach to estimate the ego-motion of a camera from sets of point correspondences taken from a monocular image sequence is presented. The underlying theory is based on a decomposition of the complete set of model parameters into suitable subsets to be optimized separately; e.g., all stationary parameters concerning camera calibration are adjusted in advance (calibrated case). The first part of the paper is devoted to the description of the mathematical model, the so-called conic error model. In contrast to existing methods, the conic error model permits us to distinguish between feasible and nonfeasible image correspondences related to 3D object points in front of and behind the camera, respectively. Based on this “half-perspective” point of view, a well-balanced objective function is derived that encourages the proper detection of mismatches and distinct relative motions. In the second part, some results of tests featuring natural image sequences are presented and analyzed. The experimental study clearly shows that the numerical stability of the new approach is superior to that achieved by comparable methods in the calibrated case based on a “full-perspective” modeling and the related epipolar geometry. Accordingly, the accuracy of the resulting ego-motion estimation turns out to be excellent, even without any further temporal filtering.
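The "half-perspective" distinction the abstract draws — accepting only correspondences whose reconstructed 3D point lies in front of both cameras — is commonly known as the cheirality condition. The sketch below illustrates that check in its standard form (linear DLT triangulation followed by a depth-sign test), assuming calibrated, normalized image coordinates; the function names are illustrative and this is not the paper's conic error model itself.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    x1, x2: normalized image points (u, v); P1, P2: 3x4 projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # homogeneous 3D point, scaled so X[3] = 1

def is_feasible(R, t, x1, x2):
    """A correspondence is feasible only if its triangulated 3D point
    has positive depth in both camera frames (cheirality condition)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
    P2 = np.hstack([R, t.reshape(3, 1)])           # second camera pose (R, t)
    X = triangulate_point(P1, P2, x1, x2)
    depth1 = X[2]                                   # z in first camera frame
    depth2 = (R @ X[:3] + t)[2]                     # z in second camera frame
    return depth1 > 0 and depth2 > 0
```

Under a purely epipolar ("full-perspective") formulation, a point behind the camera can satisfy the epipolar constraint exactly and thus go undetected as a mismatch; a feasibility test of this kind is what lets the objective function penalize such correspondences.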
