Abstract
Stereoscopic and auto-stereoscopic monitors usually produce visual fatigue in the audience due to the convergence-accommodation conflict (the discrepancy between the actual focal distance and the perceived depth). An attractive alternative to these technologies is integral photography (integral imaging, InI), initially proposed by Lippmann in 1908,1 and reintroduced approximately two decades ago thanks to the fast development of electronic matrix sensors and displays. Lippmann's concept is that one can store the 3D image of an object by acquiring many 2D elemental images of it from different positions. This is readily achieved by using a microlens array (MLA) as the camera lens. When the elemental images are projected onto a 2D display placed in front of an MLA, the different perspectives are integrated as a 3D image: every pixel of the display generates a conical ray bundle as it passes through the array, and the intersection of many such bundles produces a local concentration of light density that permits object reconstruction. The resulting scene is perceived as 3D by the observer regardless of his or her position relative to the MLA. Since an InI monitor truly reconstructs the 3D scene, the observation is produced without special goggles, with full parallax, and with no visual fatigue.2

An important challenge in projecting integral images on a monitor is the structural difference between the capture setup and the display monitor. To address this challenge, we have developed an algorithm that we call smart pseudoscopic-to-orthoscopic conversion (SPOC). It permits the calculation of new sets of synthetic elemental images (SEIs) that are fully adapted to the characteristics of the display monitor. Specifically, this global pixel-mapping algorithm permits one to select the MLA

Figure 1. Schematic of the experimental setup used for capturing an integral image of a 3D scene.
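To give a concrete flavor of what a global pixel mapping looks like, the sketch below shows the simplest such remapping used in integral imaging: transposing the captured elemental-image set so that the (i, j)-th synthetic elemental image collects pixel (i, j) from every captured elemental image. This is only an illustrative special case, not the full SPOC algorithm, which additionally accounts for the display MLA pitch, gap, and reference-plane depth; the function name and array layout are assumptions of this sketch.

```python
import numpy as np

def transpose_views(eis):
    """Global pixel remapping (illustrative special case).

    eis: array of shape (K, L, U, V) -- a K x L grid of captured
    elemental images, each U x V pixels.

    Returns an array of shape (U, V, K, L) in which synthetic
    elemental image (i, j) holds pixel (i, j) of every captured
    elemental image, i.e. out[i, j, k, l] == eis[k, l, i, j].
    """
    return np.transpose(eis, (2, 3, 0, 1))
```

In this layout the mapping is a pure permutation of indices, so no pixel values are interpolated or lost; the full SPOC mapping generalizes this idea by simulating a virtual display and a virtual capture stage with the target monitor's parameters.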