Abstract

Emerging light field images (LFIs) support 6 degrees of freedom (6DoF) user interaction, a key feature for future virtual reality (VR) media experiences. Compared to regular 2-D images, LFIs are characterized by a particular image structure that couples spatial and angular information. In practice, it is infeasible for a user to manually edit each subaperture view of an LFI individually, and such manual edits cannot guarantee parallax consistency across subapertures. To address this problem, we propose a deep-learning-based LFI editing scheme named central view augmentation propagation (CVAP), which employs an interleaved spatial-angular convolutional neural network (4-D CNN) to effectively learn both spatial and angular features from the input LFI. For comparison, we also implement a "direct editing" scheme based on the geometric correspondence between subviews, and a further benchmark based on light field super resolution (LFSR). Experimental results show that CVAP achieves higher PSNR and overall more pleasing visual quality than direct editing and LFSR.
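To make the "interleaved spatial-angular convolution" idea concrete, below is a minimal PyTorch sketch of one such block, alternating a spatial 2-D convolution (applied per angular view) with an angular 2-D convolution (applied per pixel location). The tensor layout, channel counts, and module names are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch of an interleaved spatial-angular convolution block.
# Assumes a light field tensor of shape (B, C, U, V, H, W), where (U, V)
# are the angular dimensions and (H, W) the spatial dimensions.
import torch
import torch.nn as nn

class SpatialAngularBlock(nn.Module):
    """Alternates a spatial conv over (H, W) with an angular conv over (U, V)."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.angular_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, u, v, h, w = x.shape

        # Spatial pass: fold the angular dims into the batch dim,
        # then convolve each subaperture view over (H, W).
        xs = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        xs = self.relu(self.spatial_conv(xs))
        x = xs.reshape(b, u, v, c, h, w).permute(0, 3, 1, 2, 4, 5)

        # Angular pass: fold the spatial dims into the batch dim,
        # then convolve each pixel's view stack over (U, V).
        xa = x.permute(0, 4, 5, 1, 2, 3).reshape(b * h * w, c, u, v)
        xa = self.relu(self.angular_conv(xa))
        return xa.reshape(b, h, w, c, u, v).permute(0, 3, 4, 5, 1, 2)

# Usage: a 5x5 grid of 64x64 views with 16 feature channels.
lf = torch.randn(1, 16, 5, 5, 64, 64)
out = SpatialAngularBlock(16)(lf)  # output has the same shape as the input
```

Interleaving the two passes is what lets edits made on the central view propagate consistently: the spatial convolutions refine each view, while the angular convolutions enforce agreement across views, which is how parallax consistency can be preserved.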
