Abstract

It is known that the pure light field approach to view synthesis relies on a large number of image samples to produce anti-aliased renderings; otherwise, insufficient image sampling must be compensated for by geometry sampling. Currently, geometry estimation is performed either offline or with dedicated hardware. Our solution to this dilemma rests on three key ideas: a formal analysis of the equivalence between light field rendering and plane-based warping, multi-focus imaging in a multi-camera system by plane sweeping, and the fusion of the multi-focus images using multi-view stereo. The essence of our method is to perform only as much depth estimation as the minimal joint image-geometry sampling rate requires, using off-the-shelf graphics hardware. As a result, real-time anti-aliased light field rendering is achieved even when the image samples alone are insufficient.
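To make the pipeline concrete, below is a minimal sketch (not the authors' implementation) of plane-sweep multi-focus imaging followed by a simple multi-view-stereo fusion: each candidate depth plane induces a homography into the reference view, the warped views are averaged into a multi-focus image, and per-pixel photo-consistency (variance across views) selects which plane's result to keep. All function names, the fronto-parallel plane convention, and the variance-based consistency score are illustrative assumptions, not details from the paper.

```python
import numpy as np

def warp_to_plane(image, K, R, t, K_ref, depth):
    """Warp a source `image` into the reference view via the plane
    z = depth in the reference frame. Assumes [R|t] maps reference-frame
    points to the source frame, so points on the plane satisfy
    X_src = (R + t n^T / depth) X_ref (a plane-induced homography).
    Nearest-neighbour sampling keeps the sketch short."""
    n = np.array([0.0, 0.0, 1.0])                 # fronto-parallel plane normal
    H = K @ (R + np.outer(t, n) / depth) @ np.linalg.inv(K_ref)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    q = H @ pts                                    # reference pixels -> source pixels
    q = (q[:2] / q[2]).round().astype(int)
    valid = (q[0] >= 0) & (q[0] < w) & (q[1] >= 0) & (q[1] < h)
    out = np.zeros_like(image)
    out.reshape(-1, *image.shape[2:])[valid] = image[q[1, valid], q[0, valid]]
    return out

def plane_sweep_fusion(images, cams, K_ref, depths):
    """Sweep a set of candidate depth planes. At each depth, the mean of
    the warped views is one multi-focus image; the variance across views
    scores photo-consistency. The fused output keeps, per pixel, the
    depth plane with the lowest variance (a toy multi-view stereo)."""
    h, w = images[0].shape[:2]
    best_err = np.full((h, w), np.inf)
    fused = np.zeros_like(images[0], dtype=float)
    for d in depths:
        warped = np.stack([warp_to_plane(img, K, R, t, K_ref, d)
                           for img, (K, R, t) in zip(images, cams)])
        mean = warped.mean(axis=0)
        err = warped.var(axis=0)
        if err.ndim == 3:                          # collapse colour channels
            err = err.mean(axis=-1)
        better = err < best_err
        best_err[better] = err[better]
        fused[better] = mean[better]
    return fused
```

In the setting the abstract describes, the per-plane warps would run as projective texture mapping on off-the-shelf graphics hardware; the NumPy version only illustrates the data flow. Note that the number of swept planes (the length of `depths`) is exactly the knob the minimal joint image-geometry sampling analysis would set: depth is estimated only as finely as needed to compensate for the missing image samples.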
