Abstract

In the context of uncalibrated image sequences and self-calibration, this paper analyzes the use of specific displacements (such as fixed-axis rotations, pure translations, ...) or of specific sets of camera parameters. Such assumptions induce affine or metric constraints, which can lead to self-calibration and 3D reconstruction. A unified formalism is given for models of this kind already developed in the literature, and some novel models are introduced. A hierarchy of special situations is described, so that the camera model can be tailored either to the actual robotic device supporting the camera or to the fact that only a reduced set of data is available. The resulting visual motion perception module yields a minimal 3D parameterization of the retinal displacement for a monocular visual system without calibration, leading to self-calibration and 3D dynamic analysis. The implementation of these equations is analyzed and evaluated experimentally.
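
As a minimal sketch of how one such specific displacement induces metric constraints, consider the well-known pure-rotation case: images taken by a camera rotating about its optical center are related by homographies H ~ K R K^-1, so, once H is scaled to unit determinant, the matrix w = K K^T satisfies H w H^T = w, giving linear constraints on the intrinsics K. The Python/NumPy code below illustrates this generic rotation-based self-calibration idea under those assumptions; it is not the paper's own algorithm, and the function names and the synthetic usage data are purely illustrative.

import numpy as np

def calibrate_from_rotations(homographies):
    # Illustrative rotation-based self-calibration (not the paper's method):
    # for a purely rotating camera, H ~ K R K^-1, so w = K K^T satisfies
    # H w H^T = w once det(H) = 1; each homography gives linear equations in w.
    basis = []
    for i in range(3):
        for j in range(i, 3):
            E = np.zeros((3, 3))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    rows = []
    for H in homographies:
        H = H / np.cbrt(np.linalg.det(H))       # scale so that det(H) = 1
        for i in range(3):
            for j in range(i, 3):
                # (H w H^T - w)[i, j] = 0 is linear in the 6 entries of w
                rows.append([(H @ E @ H.T - E)[i, j] for E in basis])
    _, _, Vt = np.linalg.svd(np.asarray(rows))  # null vector gives w up to scale
    a, b, c, d, e, f = Vt[-1]
    w = np.array([[a, b, c], [b, d, e], [c, e, f]])
    if np.trace(w) < 0:                         # fix the overall sign
        w = -w
    P = np.fliplr(np.eye(3))                    # flipped Cholesky: w = K K^T
    L = np.linalg.cholesky(P @ w @ P)           # with K upper triangular
    K = P @ L @ P
    return K / K[2, 2]

def skew(a):
    # Cross-product matrix, used to build test rotations via Rodrigues' formula.
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def rodrigues(axis, angle):
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    S = skew(a)
    return np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * (S @ S)

# Synthetic check: a known camera and two pure rotations about distinct axes.
K_true = np.array([[800., 0., 320.], [0., 820., 240.], [0., 0., 1.]])
Hs = [K_true @ rodrigues([0, 1, 0], 0.2) @ np.linalg.inv(K_true),
      K_true @ rodrigues([1, 0, 1], -0.3) @ np.linalg.inv(K_true)]
print(calibrate_from_rotations(Hs))             # should be close to K_true

Two rotations about distinct axes are enough to determine w (and hence K) uniquely in this sketch; a single rotation leaves a one-parameter ambiguity, which is one example of why a hierarchy of special situations and reduced camera models is useful when only limited data are available.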

