Abstract
Given multiple images of a set of points in 3D, two fundamental questions can be addressed: What is the structure of the set of points in 3D? What are the positions of the cameras relative to the points? In this paper we show that, for projective views and with structure and position defined projectively, these problems are dual: they can be solved using constraint equations in which space points and camera positions occur in a reciprocal way. More specifically, by using canonical projective reference frames for all points in space and in the images, the imaging of point sets in space by multiple cameras can be captured by constraint relations involving only three kinds of parameters: the coordinates of (1) space points, (2) camera positions, and (3) image points. The duality implies that the problem of computing camera positions from p points in q views can be solved with the same algorithm as the problem of directly reconstructing q+4 points in p-4 views. This unifies different approaches to projective reconstruction: methods based on external calibration and direct methods exploiting constraints that exist between shape and image invariants.
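To make the reciprocal constraint concrete, here is a minimal illustration using an assumed canonical parameterization of the reduced camera (a standard construction in this setting, not a formula quoted from the abstract): once four reference points are mapped to a standard projective basis, a camera with parameters (a, b, c, d) projects a space point (X, Y, Z, T) as

\[
\begin{pmatrix} a & 0 & 0 & -d \\ 0 & b & 0 & -d \\ 0 & 0 & c & -d \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ T \end{pmatrix}
=
\begin{pmatrix} aX - dT \\ bY - dT \\ cZ - dT \end{pmatrix},
\]

and the right-hand side is unchanged when (a, b, c, d) and (X, Y, Z, T) are interchanged. Under this assumed parameterization, swapping the roles of camera parameters and point coordinates turns a camera-position problem into a point-reconstruction problem, which is the mechanism behind the correspondence between p points in q views and q+4 points in p-4 views stated above.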