Abstract

Light field cameras can capture the radiance and direction of light rays in a single exposure, providing a new perspective on photography and 3D geometry perception. However, existing sub-aperture-based light field cameras are limited by their sensor resolution and cannot obtain high spatial and angular resolution simultaneously. In this paper, we propose an inference-reconstruction variational autoencoder (IR-VAE) to reconstruct a dense light field image from the four corner reference views of a light field image. The proposed IR-VAE comprises one inference network and one reconstruction network, where the inference network infers novel views from existing reference views and viewpoint conditions, and the reconstruction network reconstructs novel views from a latent variable that contains the information of reference views, novel views, and viewpoints. The conditional latent variable in the inference network is regularized by the latent variable in the reconstruction network to facilitate information flow between the conditional latent variable and novel views. We also propose a statistical distance measure, dubbed the mean local maximum mean discrepancy (MLMMD), to enable the measurement of the statistical distance between two distributions with high-resolution latent variables, which can capture richer information than their low-resolution counterparts. Finally, we propose a viewpoint-dependent indirect view synthesis method that synthesizes novel views more efficiently by leveraging adaptive convolution. Experimental results show that our proposed methods outperform state-of-the-art methods on different light field datasets.
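The MLMMD builds on the classical maximum mean discrepancy (MMD), a kernel-based statistical distance between two sets of samples. As a point of reference only (the local/mean variant proposed in the paper is not reproduced here), a minimal sketch of the standard squared MMD with a Gaussian kernel, using illustrative sample shapes:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of squared MMD:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

# Illustration: samples from the same distribution give a small MMD,
# samples from shifted distributions give a larger one.
rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (256, 8)), rng.normal(0, 1, (256, 8)))
diff = mmd2(rng.normal(0, 1, (256, 8)), rng.normal(3, 1, (256, 8)))
print(same, diff)
```

The paper's contribution is to adapt this kind of distance so it remains tractable and informative for high-resolution latent variables, where a single global kernel statistic would be a poor fit.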
