Abstract

This paper presents how the image-based rendering technique of view-dependent texture-mapping (VDTM) can be efficiently implemented using projective texture mapping, a feature commonly available in polygon graphics hardware. VDTM is a technique for generating novel views of a scene with approximately known geometry, making maximal use of a sparse set of original views. The original presentation of VDTM by Debevec, Taylor, and Malik required significant per-pixel computation and did not scale well with the number of original images. In our technique, we precompute for each polygon the set of original images in which it is visible and create a "view map" data structure that encodes the best texture map to use for a regularly sampled set of possible viewing directions. To generate a novel view, the view map for each polygon is queried to determine a set of no more than three original images to blend together in order to render the polygon with projective texture-mapping. Invisible triangles are shaded using an object-space hole-filling method. We show how the rendering process can be streamlined for implementation on standard polygon graphics hardware. We present results of using the method to render a large-scale model of the Berkeley bell tower and its surrounding campus environment.

Keywords: Texture Mapping, Camera Position, Graphics Hardware, Vertex Color, Original View
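To make the view-map query concrete, the following is a minimal sketch, not the authors' implementation: the paper stores the best original image at each regularly sampled viewing direction and blends the (at most three) images associated with the enclosing sample triangle, whereas this sketch approximates that lookup with the three nearest sampled directions and cosine-based weights. All names here (ViewMap, query, sample_dirs, best_image_per_dir) are illustrative assumptions.

```python
# Sketch of a per-polygon "view map": each sampled viewing direction stores
# the index of the best original image; a query returns up to three image
# indices with blend weights for a novel viewing direction.
import numpy as np

class ViewMap:
    def __init__(self, sample_dirs, best_image_per_dir):
        # sample_dirs: (N, 3) unit vectors, regularly sampled viewing directions
        # best_image_per_dir: (N,) index of the best original image per direction
        self.dirs = np.asarray(sample_dirs, dtype=float)
        self.best = np.asarray(best_image_per_dir, dtype=int)

    def query(self, view_dir, k=3):
        # Return up to k (image_index, weight) pairs for a novel view direction.
        d = np.asarray(view_dir, dtype=float)
        d = d / np.linalg.norm(d)
        cosines = self.dirs @ d                 # angular proximity to each sample
        nearest = np.argsort(-cosines)[:k]      # k closest sampled directions
        # Weight each sample by agreement with the query direction; normalize
        # so the blended texture contributions sum to one.
        w = np.clip(cosines[nearest], 0.0, None)
        if w.sum() == 0.0:
            w = np.ones_like(w)
        w = w / w.sum()
        # Different samples may reference the same original image; merge them.
        weights = {}
        for idx, wi in zip(self.best[nearest], w):
            weights[int(idx)] = weights.get(int(idx), 0.0) + float(wi)
        return sorted(weights.items(), key=lambda t: -t[1])

# Example: three sampled directions over a polygon, each mapped to an image.
if __name__ == "__main__":
    dirs = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [0.0, 0.7, 0.7]])
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    vm = ViewMap(dirs, best_image_per_dir=[0, 1, 2])
    print(vm.query([0.3, 0.3, 0.9]))   # up to three (image, weight) pairs
```

The returned (image, weight) pairs would then drive the per-polygon blending of projectively texture-mapped rendering passes on graphics hardware, as described in the abstract.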
