Abstract
In this work we present a novel approach for image-based rendering (IBR) of complex real scenes that have been recorded with freely moving hand-held cameras. The images are automatically calibrated and 3D scene depth maps are computed for each real view. To render a new virtual view, the depth maps of the nearest real views are fused in a scalable fashion to obtain a locally consistent 3D model on the fly. This geometrical representation is based on triangles and can be textured with the images corresponding to the depth maps using hardware-accelerated techniques. When using IBR techniques for complex real outdoor scenes, new views are generated from several hundred or even thousands of images. When interpolating between images, the geometrical structure of the scene has to be taken into account to avoid artefacts. Standard approaches like the Lightfield or the Lumigraph suffer from the lack of geometry, while others rely on a given model [1]. When using image sequences from uncalibrated hand-held cameras, it is nearly impossible to build a globally consistent 3D model automatically. The proposed system therefore relies on locally consistent 3D models that are valid for a particular viewpoint. For each novel view, the local model is rebuilt to ensure consistency. This process is performed at interactive frame rates.
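As a rough illustration only (not the authors' implementation), the C++ sketch below shows one common way to turn a single per-view depth map into the kind of triangle mesh described above, which could then be textured with the corresponding source image. The camera intrinsics (fx, fy, cx, cy), the discontinuity threshold, and the function and type names are all assumptions introduced for this example.

```cpp
// Minimal sketch: back-project a per-view depth map into a view-aligned
// triangle mesh, skipping triangles that span large depth discontinuities.
#include <cmath>
#include <vector>

struct Vertex { float x, y, z, u, v; };   // 3D position + texture coordinates
struct Triangle { int a, b, c; };         // indices into the vertex array

struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};

// depth is row-major, width*height, with 0 meaning "no depth available".
Mesh DepthMapToMesh(const std::vector<float>& depth, int width, int height,
                    float fx, float fy, float cx, float cy,
                    float maxDepthJump = 0.05f) {
    Mesh mesh;
    std::vector<int> index(depth.size(), -1);   // pixel -> vertex index, -1 = none

    // Back-project every valid depth pixel into camera space; texture
    // coordinates simply address the source image of this view.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float d = depth[y * width + x];
            if (d <= 0.0f) continue;
            Vertex v;
            v.x = (x - cx) * d / fx;
            v.y = (y - cy) * d / fy;
            v.z = d;
            v.u = static_cast<float>(x) / (width - 1);
            v.v = static_cast<float>(y) / (height - 1);
            index[y * width + x] = static_cast<int>(mesh.vertices.size());
            mesh.vertices.push_back(v);
        }
    }

    // Connect each 2x2 pixel block with two triangles; drop faces whose
    // depth range indicates a discontinuity (e.g. an occlusion boundary).
    auto depthAt = [&](int x, int y) { return depth[y * width + x]; };
    for (int y = 0; y + 1 < height; ++y) {
        for (int x = 0; x + 1 < width; ++x) {
            const int i00 = index[y * width + x];
            const int i10 = index[y * width + x + 1];
            const int i01 = index[(y + 1) * width + x];
            const int i11 = index[(y + 1) * width + x + 1];
            if (i00 < 0 || i10 < 0 || i01 < 0 || i11 < 0) continue;
            const float dmin = std::fmin(std::fmin(depthAt(x, y), depthAt(x + 1, y)),
                                         std::fmin(depthAt(x, y + 1), depthAt(x + 1, y + 1)));
            const float dmax = std::fmax(std::fmax(depthAt(x, y), depthAt(x + 1, y)),
                                         std::fmax(depthAt(x, y + 1), depthAt(x + 1, y + 1)));
            if (dmax - dmin > maxDepthJump * dmax) continue;
            mesh.triangles.push_back({i00, i10, i11});
            mesh.triangles.push_back({i00, i11, i01});
        }
    }
    return mesh;
}
```

In a viewpoint-dependent pipeline such as the one summarised above, a mesh like this would be built (and fused with the meshes of the other nearest views) per novel viewpoint, then handed to the graphics hardware for projective texturing.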