Abstract

Light field (LF) imaging, which can capture the spatial and angular information of light rays in a single shot, has received increasing attention. However, the well-known LF spatio-angular trade-off restricts many applications of LF imaging. To alleviate this problem, this paper puts forward a dual-level LF reconstruction network that improves LF angular resolution from sparsely sampled LF inputs. Instead of using a 2D or 3D LF representation in the reconstruction process, this paper proposes an LF directional EPI volume representation to synthesize the full LF. The proposed representation encourages interaction between the spatial and angular dimensions in the convolution operation, which benefits the recovery of lost texture details in the synthesized sub-aperture images (SAIs). To extract the high-dimensional geometric features of the angular mapping from the low-angular-resolution inputs to the full high-angular-resolution LF, a dual-level deep network is introduced. The proposed network consists of an SAI synthesis sub-network and a detail refinement sub-network, which allows LF reconstruction under a dual-level constraint (i.e., from coarse to fine). The network model is evaluated on several real-world LF scene datasets, and extensive experiments validate that the proposed model outperforms state-of-the-art methods and achieves better perceptual quality of the reconstructed SAIs.
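To make the coarse-to-fine idea concrete, the following is a minimal PyTorch-style sketch of a dual-level pipeline operating on an EPI-volume-like tensor: a synthesis stage that densifies the angular dimension, followed by a residual refinement stage. The module names, layer counts, channel widths, and angular upsampling scheme here are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
# Hypothetical sketch of a dual-level (coarse-to-fine) LF reconstruction network.
# All names and sizes below are assumptions for illustration only.
import torch
import torch.nn as nn


class SAISynthesisNet(nn.Module):
    """Coarse stage: maps a sparse directional EPI volume to a dense one.

    Tensor shape: (batch, channels, angular, height, width), so 3D convolutions
    mix the angular and spatial dimensions jointly.
    """

    def __init__(self, in_angular=2, out_angular=7, feat=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Upsample only the angular axis from in_angular to out_angular views.
        self.angular_up = nn.Upsample(
            scale_factor=(out_angular / in_angular, 1, 1),
            mode="trilinear", align_corners=False,
        )
        self.reconstruct = nn.Conv3d(feat, 1, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.features(x)
        x = self.angular_up(x)
        return self.reconstruct(x)


class DetailRefineNet(nn.Module):
    """Fine stage: residual refinement of the coarsely synthesized SAIs."""

    def __init__(self, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse):
        # Learn only the missing texture details on top of the coarse estimate.
        return coarse + self.body(coarse)


class DualLevelLFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.synthesis = SAISynthesisNet()
        self.refine = DetailRefineNet()

    def forward(self, sparse_epi_volume):
        coarse = self.synthesis(sparse_epi_volume)
        return coarse, self.refine(coarse)


if __name__ == "__main__":
    # Toy input: 2 input views stacked along one angular axis of a directional EPI volume.
    lf = torch.randn(1, 1, 2, 64, 64)
    coarse, fine = DualLevelLFNet()(lf)
    print(coarse.shape, fine.shape)  # both (1, 1, 7, 64, 64)
```

In such a setup, the dual-level constraint would correspond to supervising both the coarse and the refined outputs against the ground-truth dense LF; how the two losses are weighted in the actual paper is not specified here.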
