Abstract

We propose a new algorithm for model-based extrinsic camera calibration that allows one to separate the recovery of the relative orientation of the camera from the recovery of its relative position, given a set of at least three correspondences between model and image points. The key idea is to replace each (real) model point whose correspondence is known with two (virtual) model edges, and then to use the fact that these edges have pairwise intersections in 3D space to derive a set of alignment constraints. We provide a proof that the resulting technique is essentially more powerful than any of the traditional methods for decoupled orientation and position recovery that rely solely on line correspondences. We also present a detailed example of a real-life application that benefits from our work, namely autonomous navigation using distant visual landmarks. We use simulations to show that, for this specific application, our algorithm, when compared to similar techniques, is either significantly more accurate at the same computational cost, or significantly faster with roughly the same average-case accuracy.
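To make the decoupling idea concrete, the sketch below illustrates the classical line-based scheme that the abstract uses as its baseline, not the authors' point-to-virtual-edge construction itself: each 3D line (point p_i, direction d_i) observed as an image line yields an interpretation-plane normal n_i in the camera frame, giving the orientation-only constraint n_i · (R d_i) = 0 and, once R is known, the linear position constraint n_i · (R p_i + t) = 0. The synthetic data, solver choice, and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Ground-truth pose used only to generate noise-free synthetic observations.
R_true = Rotation.from_rotvec([0.10, -0.20, 0.15]).as_matrix()
t_true = np.array([0.3, -0.1, 2.0])

# Three model lines in the world frame: a point on each line and a unit direction.
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))
D = rng.normal(size=(3, 3))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Interpretation-plane normals in the camera frame: the plane through the camera
# centre containing the observed line, so n_i is proportional to X_i x (R d_i)
# with X_i = R p_i + t a camera-frame point on the line.
N = np.cross(R_true @ P.T + t_true[:, None], R_true @ D.T, axis=0).T
N /= np.linalg.norm(N, axis=1, keepdims=True)

# Step 1: recover the orientation alone from n_i . (R d_i) = 0.
def orientation_residuals(rotvec):
    R = Rotation.from_rotvec(rotvec).as_matrix()
    return np.einsum('ij,ij->i', N, (R @ D.T).T)

# Identity initial guess; a real system would use a prior to avoid local minima.
R_est = Rotation.from_rotvec(
    least_squares(orientation_residuals, np.zeros(3)).x).as_matrix()

# Step 2: recover the position from the linear system n_i . (R p_i + t) = 0.
A = N
b = -np.einsum('ij,ij->i', N, (R_est @ P.T).T)
t_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est, ord=2) - np.linalg.norm(t_true))
```

The point of the decoupling is visible in the two steps: the rotation is estimated from direction constraints that do not involve t, after which the translation follows from a small linear system.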
