Abstract
This paper presents a novel method for synthesizing a virtual view from two sets of differently focused images taken by an aperture camera array for a scene with two approximately constant depths. The proposed method consists of two steps. The first step is a view interpolation that reconstructs an all-in-focus dense light field of the scene. The second step synthesizes a virtual view from the reconstructed dense light field by using the light field rendering technique. The view interpolation in the first step can be achieved simply with linear filters that are designed to shift different object regions separately, without estimating the depth map of the scene. The proposed method can effectively create a dense array of pin-hole cameras (i.e., all-in-focus images) so that the virtual view can be synthesized with higher quality.
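As a rough illustration of the two steps, the sketch below uses NumPy/SciPy. The first function is only a simplified stand-in for the view interpolation: it assumes an explicit near-layer mask and known per-layer disparities, whereas the paper's linear filters shift the two depth layers without explicit segmentation or depth estimation. The second function renders a virtual view from the reconstructed dense grid of all-in-focus views by bilinearly blending the four nearest views, a reduced form of light field rendering that assumes rectified views and a virtual camera on the camera plane. All names, array layouts, and parameters are illustrative assumptions, not taken from the paper.

# Illustrative sketch only; the masks, disparities, and array layouts below
# are assumptions for the example, not the paper's formulation.
import numpy as np
from scipy.ndimage import shift as nd_shift


def interpolate_two_layer_view(img, near_mask, d_near, d_far, t):
    """Approximate an intermediate view at fractional baseline position t
    by shifting the near and far layers of `img` (H, W, 3) separately.

    Unlike the paper's linear filters, this stand-in assumes an explicit
    near-layer mask and known per-layer horizontal disparities.
    """
    near = np.where(near_mask[..., None], img, 0.0)
    far = np.where(near_mask[..., None], 0.0, img)
    # Shift each layer horizontally by its fraction of the disparity.
    near_s = nd_shift(near, (0.0, t * d_near, 0.0), order=1, mode="nearest")
    far_s = nd_shift(far, (0.0, t * d_far, 0.0), order=1, mode="nearest")
    mask_s = nd_shift(near_mask.astype(float), (0.0, t * d_near), order=1) > 0.5
    # Composite: the shifted near layer occludes the shifted far layer.
    return np.where(mask_s[..., None], near_s, far_s)


def render_virtual_view(light_field, s, t):
    """Render a virtual view from a dense grid of all-in-focus views.

    `light_field` has shape (S, T, H, W, 3); (s, t) is the virtual camera
    position in grid units. The virtual camera is assumed to lie on the
    camera plane, so the view is a bilinear blend of the four nearest views.
    """
    S, T = light_field.shape[:2]
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1, t1 = min(s0 + 1, S - 1), min(t0 + 1, T - 1)
    ws, wt = s - s0, t - t0
    return ((1 - ws) * (1 - wt) * light_field[s0, t0]
            + (1 - ws) * wt * light_field[s0, t1]
            + ws * (1 - wt) * light_field[s1, t0]
            + ws * wt * light_field[s1, t1])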