Abstract

In this contribution we focus on calibration and 3D surface modeling from uncalibrated images. A large number of images of a scene is collected with a hand-held camera by simply waving the camera around the objects to be modeled. The images need not be taken in sequential order, so either video streams or sets of still images may be processed. Since images are taken from all possible viewpoints and directions, we are effectively sampling the viewing sphere around the objects. Viewpoint calibration is obtained with a structure-from-motion approach that tracks salient image points over multiple images. The calibration exploits the topology of the viewpoint distribution over the viewing sphere and builds a viewpoint mesh that connects all nearby viewpoints, resulting in a robust multi-image calibration. For each viewpoint a depth map is estimated that considers all corresponding image matches of nearby viewpoints. All depth maps are fused to generate a viewpoint-independent 3D surface representation based on a volumetric voting scheme. A voxel space is built into which the depth estimates from all the viewpoints are projected, together with their estimation uncertainty. Integration over all depth estimates determines a probability density distribution of the estimated scene surface. The approach was verified on long image sequences obtained with a hand-held video camera.
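The volumetric voting scheme described above can be illustrated with a minimal sketch. The abstract does not give the exact kernel or grid parameters, so this is an assumption: each depth estimate casts a Gaussian vote along its ray, with a standard deviation modeling its estimation uncertainty, and the accumulated votes approximate the probability density of the surface position. For simplicity the sketch fuses estimates along a single 1D voxel column rather than a full 3D voxel space.

```python
import numpy as np

def fuse_depth_votes(depth_estimates, sigmas, num_voxels=100, z_range=(0.0, 10.0)):
    """Accumulate depth estimates from several viewpoints into a 1D voxel column.

    Each estimate votes with a Gaussian kernel whose width reflects its
    estimation uncertainty (hypothetical model; the paper's exact weighting
    is not specified in the abstract). The summed votes approximate a
    probability density of the surface location along the viewing ray.
    """
    z = np.linspace(z_range[0], z_range[1], num_voxels)  # voxel centres along the ray
    votes = np.zeros(num_voxels)
    for depth, sigma in zip(depth_estimates, sigmas):
        # Gaussian vote centred at the estimated depth, scaled by its reliability.
        votes += np.exp(-0.5 * ((z - depth) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return z, votes

# Three consistent estimates near depth 5.0 plus one uncertain outlier:
z, votes = fuse_depth_votes([4.9, 5.1, 5.0, 7.5], [0.2, 0.2, 0.3, 1.5])
surface_depth = z[np.argmax(votes)]  # voxel with maximum accumulated density
```

Because the outlier carries a large uncertainty, its vote is spread thinly over many voxels and the density peak stays near the consistent estimates, which is the robustness property the voting scheme is designed to provide.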
