Abstract

Emerging light field images (LFIs) support 6 degrees of freedom (6DoF) of user interaction, a key feature for future virtual reality (VR) media experiences. Compared with regular 2-D images, LFIs have a distinctive image structure that carries both spatial and angular information. In practice, it is infeasible for a user to manually edit each subaperture view of an LFI individually, and manual editing cannot guarantee parallax consistency between different subapertures. To address this problem, we propose a deep-learning-based LFI editing scheme named central view augmentation propagation (CVAP), which employs an interleaved spatial-angular convolutional neural network (4-D CNN) to effectively learn both the spatial and angular features of the input LFI. For comparison, we also implemented a “direct editing” scheme based on the geometric correspondence between subviews, as well as a benchmark method based on light field super-resolution (LFSR). Experimental results show that CVAP achieves higher PSNR and overall more pleasing visual quality than both direct editing and LFSR.

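The abstract does not spell out the network, but the general idea of an interleaved spatial-angular convolution can be illustrated with a short sketch. The PyTorch code below is our own reconstruction of that generic technique, not the authors' CVAP architecture: it assumes the light field is stored as a 6-D tensor (batch, channels, U, V, S, T) with U x V angular views, each of spatial size S x T, and the module name SpatialAngularBlock and all parameters are hypothetical.

```python
import torch
import torch.nn as nn

class SpatialAngularBlock(nn.Module):
    """One interleaved spatial-angular convolution block (a sketch of the
    generic 4-D CNN idea; not the paper's exact CVAP network).

    Input layout is assumed to be (B, C, U, V, S, T):
    U x V angular views, each with spatial resolution S x T."""

    def __init__(self, channels, ksize=3):
        super().__init__()
        pad = ksize // 2
        # 2-D convolution over the spatial dims (S, T), shared across views
        self.spatial = nn.Conv2d(channels, channels, ksize, padding=pad)
        # 2-D convolution over the angular dims (U, V), shared across pixels
        self.angular = nn.Conv2d(channels, channels, ksize, padding=pad)
        self.act = nn.ReLU(inplace=True)

    def forward(self, lf):
        b, c, u, v, s, t = lf.shape
        # Spatial pass: fold the angular dims into the batch axis.
        x = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, s, t)
        x = self.act(self.spatial(x))
        x = x.reshape(b, u, v, c, s, t)
        # Angular pass: fold the spatial dims into the batch axis.
        x = x.permute(0, 4, 5, 3, 1, 2).reshape(b * s * t, c, u, v)
        x = self.act(self.angular(x))
        # Restore the (B, C, U, V, S, T) layout.
        return x.reshape(b, s, t, c, u, v).permute(0, 3, 4, 5, 1, 2)

# Example: an 8x8-view light field of 64x64 feature patches, 16 channels.
lf = torch.randn(1, 16, 8, 8, 64, 64)
out = SpatialAngularBlock(16)(lf)
print(out.shape)  # torch.Size([1, 16, 8, 8, 64, 64])
```

Stacking such blocks alternates spatial and angular filtering, which is how information from an edited central view could propagate across the angular dimensions to the other subaperture views while preserving parallax consistency.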