Abstract
Camera networks have gained increasing importance in recent years. Previous approaches mostly used point correspondences between different camera views to calibrate such systems. However, it is often difficult or even impossible to establish such correspondences. In this paper, we therefore present an approach to calibrate a static camera network in which no correspondences between different camera views are required. Each camera tracks its own set of feature points on a commonly observed moving rigid object, and these 2D feature trajectories are then fed into our algorithm. By assuming the cameras can be well approximated by an affine camera model, we show that the projection of any feature point trajectory onto any affine camera axis is restricted to a 13-dimensional subspace. This observation enables the computation of the camera calibration matrices, the coordinates of the tracked feature points, and the rigid motion of the object with a non-iterative trilinear factorization approach. This solution can then be used as an initial guess for iterative optimization schemes that exploit the strong algebraic structure contained in the data. Our approach can handle extreme configurations, e.g. a camera in the network tracking only a single feature point. The applicability of our algorithm is evaluated with synthetic and real-world data.
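The 13-dimensional subspace claim can be illustrated numerically. For a rigid motion (R_f, T_f) at frame f, an affine camera axis c with offset d, and an object point X, the measurement is c·(R_f X + T_f) + d, i.e. a linear combination of the 9 rotation entries, the 3 translation entries, and a constant: 13 basis functions of time. A minimal sketch (all sizes and variable names are hypothetical, not taken from the paper) builds such a measurement matrix from random rigid motions and checks its rank:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix;
    # fix the signs and the determinant to obtain a proper rotation.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

F, K, N = 20, 4, 5                      # frames, camera axes, feature points
Rs = [random_rotation(rng) for _ in range(F)]
Ts = rng.standard_normal((F, 3))        # rigid-body translations per frame
axes = rng.standard_normal((K, 3))      # affine camera axis directions c_k
offsets = rng.standard_normal(K)        # affine offsets d_k
X = rng.standard_normal((N, 3))         # points on the rigid object

# One row per (axis, point) pair, one column per frame:
# W[(k, n), f] = c_k . (R_f X_n + T_f) + d_k
W = np.empty((K * N, F))
for f in range(F):
    moved = X @ Rs[f].T + Ts[f]               # points after rigid motion, (N, 3)
    proj = axes @ moved.T + offsets[:, None]  # projections onto each axis, (K, N)
    W[:, f] = proj.ravel()

# Despite having 20 rows and 20 columns, the matrix has rank at most 13.
print(np.linalg.matrix_rank(W))
```

For generic random data the rank is exactly 13, which is what makes the trilinear factorization of the measurement matrix into cameras, points, and rigid motion possible.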