Abstract

Light field imaging, which captures the spatial-angular information of light incident on the image sensor, enables many interesting applications such as image refocusing and augmented reality. However, due to limited sensor resolution, a trade-off exists between spatial and angular resolution. To increase the angular resolution, view synthesis techniques have been adopted to generate new views from existing ones. Traditional learning-based view synthesis, however, mainly considers the image quality of each individual view of the light field and neglects the quality of the refocused images. In this paper, we propose a new loss function called refocused image error (RIE) to address this issue. The main idea is that the image quality of the synthesized light field should be optimized in the refocused image domain, because that is where the light field is ultimately viewed. We analyze the behavior of RIE in the spectral domain and compare our approach against previous approaches on both real (INRIA) and software-rendered (HCI) light field datasets using objective assessment metrics such as MSE, MAE, PSNR, SSIM, and GMSD. Experimental results show that the light fields generated by our method yield better refocused images than those of previous methods.
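To make the idea concrete, the sketch below illustrates an RIE-style loss under simple assumptions: refocusing is approximated by classic shift-and-sum over sub-aperture views with integer pixel shifts, and the loss is the MSE between refocused renderings of the predicted and ground-truth light fields at several focus depths. The function names, the `(U, V, H, W)` layout, and the set of focus parameters `alphas` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocusing of a light field.

    lf: array of shape (U, V, H, W), a grid of sub-aperture views.
    alpha: focus parameter; view (u, v) is shifted by
    alpha * (u - u0, v - v0) before averaging. Integer shifts via
    np.roll are a simplification of sub-pixel interpolation.
    """
    U, V, H, W = lf.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            acc += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (U * V)

def refocused_image_error(lf_pred, lf_true, alphas):
    """Hypothetical RIE-style loss: mean MSE between refocused
    images of the predicted and ground-truth light fields,
    averaged over a set of focus parameters."""
    return float(np.mean([
        np.mean((refocus(lf_pred, a) - refocus(lf_true, a)) ** 2)
        for a in alphas
    ]))
```

In a learning setting, this loss would replace (or complement) the per-view error so that synthesis artifacts are penalized according to their effect on the refocused output rather than on each view in isolation.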
