Abstract

Compared to conventional photography, which captures only spatial intensity, light field imaging records both angular and spatial information by collecting light from all directions. This rich information can be used in many applications such as depth estimation and post-capture refocusing. In addition, the emergence of consumer light field cameras has made light field imaging widely accessible. However, its limited resolution remains a major obstacle to exploiting the capabilities it provides. In this article, we alleviate this drawback using a machine learning approach. Our proposed network builds upon existing reconstruction techniques that divide the process into disparity estimation and final image reconstruction. This is achieved with two consecutive neural networks, while the whole network is trained jointly. We propose to use a predefined convolutional network at the first stage to reduce preprocessing time, and we use dual disparity vectors to reduce interpolation error when warping the input images. Our system was trained on real light field images to reconstruct light fields at multiple angular resolutions quickly and accurately. Experimental results demonstrate that the proposed system can reconstruct high-quality images faster than state-of-the-art techniques.
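
To make the warping step described above concrete, the sketch below shows one way an input sub-aperture view can be backward-warped toward a novel angular position using an estimated per-pixel disparity map. This is only an illustration under our own assumptions (function names, the angular-offset convention, and the use of SciPy bilinear resampling are not taken from the paper).

    # Hypothetical sketch: backward-warp an input view toward a novel
    # angular position using an estimated disparity map (assumed names).
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_view(view, disparity, du, dv):
        """Warp a source sub-aperture view (H, W) toward a target position.

        disparity : per-pixel disparity map (H, W), e.g. from a first-stage network
        du, dv    : angular offset between the source and the target view
        """
        h, w = view.shape
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        # Sample the source at positions shifted by disparity * angular offset.
        coords = np.stack([yy + dv * disparity, xx + du * disparity])
        return map_coordinates(view, coords, order=1, mode="nearest")

    # Toy usage: warp one view by a constant disparity of 1.5 pixels per unit
    # of angular offset. In a two-stage pipeline, every input view would be
    # warped this way and the warped stack fed to a reconstruction network.
    rng = np.random.default_rng(0)
    source = rng.random((64, 64))
    disparity = np.full((64, 64), 1.5)
    warped = warp_view(source, disparity, du=1.0, dv=0.0)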

Highlights

  • Light field imaging provides rich angular and spatial information about 3D space

  • We focus mainly on techniques for increasing angular resolution through view synthesis

  • To compare our model with the model proposed by Wu et al. [19], we show the quality of 7×7 views reconstructed from 3×3 input views (the same setting as reconstructing 4×4 views from 2×2 input views)

Introduction

Light field imaging can provide rich information about 3D space. It records the light rays received from all directions separately, in contrast to traditional imaging, which captures only a 2D projection of the perceived light by integrating the rays. The plenoptic function introduced by Adelson and Bergen [1] describes light as a function of time, wavelength, and viewing position; it was parameterized in seven dimensions as P(x, y, t, λ, Vx, Vy, Vz). This description was later simplified by parameterizing each ray by its intersections with two planes in arbitrary positions, L(u, v, s, t), where (u, v) is the intersection point with the first plane and (s, t) the intersection point with the second plane, as described in [2].
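
For concreteness, a discretized light field under this two-plane parameterization can be stored as a 4D array indexed by the angular coordinates (u, v) and the spatial coordinates (s, t). The following sketch is purely illustrative (the array sizes and the corner-aligned 3×3 sampling pattern are our own assumptions); it shows how a sub-aperture view and a sparse 3×3 subset of views, like the network inputs mentioned in the highlights, are obtained by simple indexing.

    import numpy as np

    # Toy 4D light field L[u, v, s, t]: a 7x7 grid of 64x64-pixel views.
    U, V, S, T = 7, 7, 64, 64
    rng = np.random.default_rng(0)
    lightfield = rng.random((U, V, S, T))

    # A single sub-aperture image: fix the angular coordinates (u, v).
    center_view = lightfield[U // 2, V // 2]      # shape (S, T)

    # A sparse 3x3 subset of views (e.g. the input from which a network
    # would synthesize the remaining views of the 7x7 grid).
    sparse_views = lightfield[::3, ::3]           # shape (3, 3, S, T)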
