Abstract

Synthesizing the image of a 3-D scene as it would be captured by a camera from an arbitrary viewpoint is a central problem in Computer Graphics. Given a complete 3-D model, it is possible to render the scene from any viewpoint; however, constructing such models is a tedious task. Here, we propose to bypass the model construction phase altogether and to generate images of a 3-D scene from any novel viewpoint directly from prestored images. Unlike previously presented methods, we completely avoid inferring and reasoning in 3-D by using projective invariants. These invariants are derived from corresponding points in the prestored images; the correspondences between features are established off-line in a semi-automated way. It is then possible to generate wireframe animation in real time on a standard computing platform. Well-understood texture mapping methods can be applied to the wireframes to realistically render new images from the prestored ones. The method proposed here should allow the integration of computer-generated and real imagery for applications such as walkthroughs in realistic virtual environments. We illustrate our approach on synthetic and real indoor and outdoor images.
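The projective invariants mentioned above are quantities that are unchanged by a change of camera viewpoint. The classic example is the cross-ratio of four collinear points, which is preserved by any projective transformation (homography). The following sketch, a minimal illustration and not the authors' actual algorithm, verifies this invariance numerically with an arbitrary, hypothetical homography standing in for a viewpoint change:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear 2-D points."""
    # Parameterize each point by its signed distance along the line from a.
    direction = (d - a) / np.linalg.norm(d - a)
    ta, tb, tc, td = (np.dot(p - a, direction) for p in (a, b, c, d))
    return ((tc - ta) * (td - tb)) / ((tc - tb) * (td - ta))

def apply_homography(H, p):
    """Apply a 3x3 projective transform to a 2-D point (homogeneous coords)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four collinear points on the line y = 2x + 1.
pts = [np.array([x, 2.0 * x + 1.0]) for x in (0.0, 1.0, 3.0, 7.0)]

# A hypothetical homography simulating a change of viewpoint.
H = np.array([[1.2,   0.1,   5.0],
              [-0.3,  0.9,   2.0],
              [0.001, 0.002, 1.0]])

# Homographies map lines to lines, so the warped points are still collinear.
warped = [apply_homography(H, p) for p in pts]

before = cross_ratio(*pts)
after = cross_ratio(*warped)
# before and after agree: the cross-ratio is a projective invariant.
```

Because such quantities can be measured in the prestored images alone, constraints of this kind let new views be predicted from point correspondences without ever reconstructing 3-D structure.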
