Abstract

A light field contains information in four dimensions, two spatial and two angular. Representing a light field by sampling it with a fixed number of pixels implies an inherent trade-off between angular resolution and spatial resolution, one apparently fixed at the time of capture. To enable flexible trade-offs between spatial and angular resolution after the fact, in this paper we apply super-resolution techniques in an integrated fashion. Our approach explores the similarity between light field super resolution (LFSR) and single image super resolution (SISR) and proposes a neural network framework that can carry out flexible super resolution tasks. We present concrete instances of the framework for center-view spatial LFSR, full-view spatial LFSR, and combined spatial and angular LFSR. Experiments with synthetic and real-world datasets show that the center-view and full-view approaches outperform state-of-the-art spatial LFSR by over 1 dB in PSNR, and that the combined approach achieves performance comparable to state-of-the-art spatial LFSR algorithms. Visual results for images rendered from the combined approach show improved resolution of detail, without rendering artifacts.
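As a rough illustration of the fixed-budget trade-off the abstract describes (not taken from the paper itself), the following minimal NumPy sketch treats a light field as a 4D array L(u, v, s, t) with two angular and two spatial coordinates; the constant PIXEL_BUDGET and helper spatial_size are hypothetical names chosen here for exposition.

```python
import numpy as np

# With a fixed sensor budget of N samples, N = U * V * S * T, so
# increasing angular resolution (U, V) necessarily reduces the
# spatial resolution (S, T) available to each sub-aperture view.

PIXEL_BUDGET = 9 * 9 * 512 * 512  # fixed number of captured samples (assumed)

def spatial_size(angular_res: int) -> int:
    """Approximate square spatial resolution per view for an
    angular_res x angular_res grid of views under the fixed budget."""
    per_view = PIXEL_BUDGET // (angular_res * angular_res)
    return int(per_view ** 0.5)

for u in (3, 5, 9, 17):
    print(f"{u}x{u} views -> ~{spatial_size(u)}x{spatial_size(u)} px per view")

# A 4D light field as an array; the center view is the sub-aperture
# image at the middle angular coordinate, the target of center-view LFSR.
U = V = 9
S = T = spatial_size(U)
light_field = np.zeros((U, V, S, T), dtype=np.float32)
center_view = light_field[U // 2, V // 2]  # shape (S, T)
```

Under this framing, spatial LFSR upsamples the (s, t) axes of each view, angular LFSR synthesizes new (u, v) views, and the combined task does both.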
