Abstract

In this paper, we present a deep nonparametric Bayesian method for synthesizing a light field from a single image. Conventionally, light-field capture requires specialized optical hardware, and the gain in angular resolution often comes at the expense of a reduction in spatial resolution. Techniques for computationally generating the light field from a single image extend to a variety of applications, ranging from microscopy and materials analysis to vision-based robotic control and autonomous vehicles. We treat the light field as a set of sub-aperture views, and our model computes novel viewpoints using three major components. First, a convolutional neural network predicts a depth probability map from the input image. Second, a multi-scale feature dictionary is constructed within a multi-layer dictionary learning network. Third, novel views are synthesized by combining the probabilistic depth map with the multi-scale feature dictionary. Experiments show that our method outperforms several state-of-the-art novel view synthesis methods in the resolution of the synthesized views.
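Since the abstract describes the three-stage pipeline only at a high level, the following is a minimal illustrative sketch in PyTorch of how such a pipeline could be wired together. All module names (DepthProbabilityNet, MultiScaleDictionary, ViewSynthesizer), layer configurations, the number of depth planes, and the dictionary size are assumptions made for illustration; they are not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_DEPTH_PLANES = 8   # assumed discretization of the depth probability map
NUM_ATOMS = 64         # assumed dictionary size per scale


class DepthProbabilityNet(nn.Module):
    """Stage 1: CNN predicting a per-pixel probability over depth planes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, NUM_DEPTH_PLANES, 3, padding=1),
        )

    def forward(self, img):
        # Softmax over the depth dimension yields a probability map (B, D, H, W)
        return F.softmax(self.net(img), dim=1)


class MultiScaleDictionary(nn.Module):
    """Stage 2: feature dictionaries at two scales, standing in for the
    paper's multi-layer dictionary learning network."""
    def __init__(self):
        super().__init__()
        # 1x1 convolutions act as learned dictionaries over local features
        self.dict_fine = nn.Conv2d(3, NUM_ATOMS, 1)
        self.dict_coarse = nn.Conv2d(3, NUM_ATOMS, 1)

    def forward(self, img):
        fine = F.relu(self.dict_fine(img))
        coarse = F.relu(self.dict_coarse(F.avg_pool2d(img, 2)))
        coarse = F.interpolate(coarse, size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        return torch.cat([fine, coarse], dim=1)  # (B, 2*NUM_ATOMS, H, W)


class ViewSynthesizer(nn.Module):
    """Stage 3: fuse the depth probabilities and dictionary features into a
    sub-aperture view for a target viewpoint offset (u, v)."""
    def __init__(self):
        super().__init__()
        self.render = nn.Conv2d(2 * NUM_ATOMS + NUM_DEPTH_PLANES + 2, 3, 3,
                                padding=1)

    def forward(self, depth_prob, features, uv):
        b, _, h, w = features.shape
        # Broadcast the target sub-aperture coordinate over the image grid
        uv_map = uv.view(b, 2, 1, 1).expand(b, 2, h, w)
        return self.render(torch.cat([depth_prob, features, uv_map], dim=1))


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)       # single input image
    uv = torch.tensor([[0.5, -0.5]])     # target sub-aperture coordinate
    depth_prob = DepthProbabilityNet()(img)
    features = MultiScaleDictionary()(img)
    view = ViewSynthesizer()(depth_prob, features, uv)
    print(view.shape)                    # torch.Size([1, 3, 64, 64])

In this sketch, synthesizing the full light field would amount to running the third stage once per sub-aperture coordinate (u, v), reusing the depth probabilities and dictionary features computed from the single input image.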
