Abstract

Layered light field display, which consists of a backlight and several light-attenuating layers, has been attracting attention because of its potential to simultaneously support many viewing directions and high resolution for each direction. The transmittances of the layers' pixels can be controlled individually, and are determined inversely from the expected observation for each viewing direction. The expected observations are typically represented as a set of multi-view images. We have developed a simulator of the layered light field display using computer graphics technology, and evaluated the quality of displayed images (output quality) using real multi-view images as input. An important finding from the evaluation is that aliasing artifacts are occasionally observed from directions for which no input image exists. According to plenoptic sampling theory, preventing aliasing artifacts requires limiting the disparities between neighboring input images to within ±1 pixel, which in turn requires very small viewpoint intervals. However, it is not always possible to capture multi-view images dense enough to satisfy this aliasing-free condition. To tackle this problem, we propose to use image-based rendering techniques to synthesize sufficiently dense virtual multi-view images from actually photographed images. We demonstrate that our method enables high-quality visualization without aliasing artifacts even when the photographed multi-view images are sparse.
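The aliasing-free condition mentioned above can be made concrete with a small sketch. Assuming a pinhole camera model with focal length expressed in pixels, the disparity of a scene point relative to the display plane is f·B·(1/z − 1/z0) for a viewpoint interval B, so the largest interval satisfying the ±1-pixel limit follows directly. All function names and numeric values below are hypothetical, for illustration only; they are not taken from the paper.

```python
# Sketch of the aliasing-free (plenoptic sampling) condition: the disparity
# between neighboring input views must stay within +-1 pixel over the scene's
# depth range. Hypothetical parameters, illustration only.

def disparity_px(f_px, baseline_m, z_m, z0_m):
    """Disparity (pixels) of a point at depth z_m, measured relative to the
    display plane at depth z0_m, between two views a baseline apart."""
    return f_px * baseline_m * (1.0 / z_m - 1.0 / z0_m)

def max_baseline(f_px, z_near_m, z_far_m, z0_m, limit_px=1.0):
    """Largest viewpoint interval that keeps |disparity| <= limit_px for all
    depths in [z_near_m, z_far_m]; the worst case is at a depth extreme."""
    worst = max(abs(1.0 / z_near_m - 1.0 / z0_m),
                abs(1.0 / z_far_m - 1.0 / z0_m))
    return limit_px / (f_px * worst)

# Example: 1000-px focal length, scene spanning 1.5 m to 4 m,
# display plane placed at 2 m.
b = max_baseline(1000.0, 1.5, 4.0, 2.0)
# Both depth extremes then stay within the +-1 px disparity limit:
assert abs(disparity_px(1000.0, b, 1.5, 2.0)) <= 1.0 + 1e-9
assert abs(disparity_px(1000.0, b, 4.0, 2.0)) <= 1.0 + 1e-9
print(f"max viewpoint interval: {b * 100:.2f} cm")  # prints "max viewpoint interval: 0.40 cm"
```

The resulting interval of a few millimeters illustrates why capturing photographed views densely enough is often impractical, motivating the synthesis of virtual intermediate views by image-based rendering.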
