Abstract

In many areas, ranging from medical imaging to visual entertainment, acquiring and displaying 3D information is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and later combined by post-capture algorithms, based on an image formation model, to synthesize images of the scene from novel viewpoints. Stereoscopic augmented reality devices, through which it is possible to view the three-dimensional real world simultaneously with an overlaid digital stereoscopic image pair, could benefit from the binocular content that multifocus computational imaging makes possible. The spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired viewpoint of each image of the pair, along with their parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and thereby make stereoscopic augmented reality devices less prone to visual fatigue.
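
The viewpoint synthesis described above can be illustrated with a minimal shift-and-add sketch: each slice of a focal stack is displaced by a disparity proportional to its depth index and the results are averaged, so opposite shift directions yield the two eyes of a stereo pair. The `synthesize_view` function, the per-slice depth ordering, and the pixel-per-slice baseline are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def synthesize_view(stack, baseline):
    """Hypothetical shift-and-add view synthesis from a focal stack.

    stack    -- array of shape (n_slices, H, W); each slice is assumed
                to be in focus at a different depth plane.
    baseline -- horizontal shift in pixels per depth slice; its sign
                selects the left or right eye of the stereo pair.
    """
    n, h, w = stack.shape
    out = np.zeros((h, w))
    for k, img in enumerate(stack):
        # Shift each depth slice by a disparity proportional to its index,
        # then accumulate; averaging approximates the synthesized view.
        out += np.roll(img, int(round(k * baseline)), axis=1)
    return out / n

# Opposite shift signs give the two viewpoints of the stereo pair;
# changing |baseline| changes the parallax, and hence depth perception.
stack = np.random.rand(4, 8, 8)
left = synthesize_view(stack, +1.0)
right = synthesize_view(stack, -1.0)
```

In practice, adjusting the magnitude of the shift controls the parallax between the two synthesized views, which is how the displayed depth of the stereo pair could be tuned.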

