Abstract

Light field imaging can encode abundant scene information, including the intensity and direction of light rays, into 4D light field images. However, the limited number of sensor pixels in commercial light field cameras leads to a trade-off between spatial and angular resolution. In this paper, an angular super-resolution framework is proposed to synthesize new views and overcome this hardware restriction. First, a light field intrinsic feature convolution is proposed to extract intrinsic information, i.e., scene content, complete view correlations, and epipolar structures. Consequently, spatial, angular, and cross-domain information can be preserved in the extracted features. Second, a spatial–angular stream and a depth stream are built on the light field intrinsic feature convolution to synthesize high-angular-resolution light fields. The spatial–angular stream exploits the light field intrinsic information to improve the angular resolution, whereas the depth stream disentangles geometric information from the extracted intrinsic features and uses it to warp the given sub-aperture images to the new view positions. Both streams synthesize high-quality intermediate results, in which the intrinsic and geometric information are exploited separately. Finally, a confidence-based stream fusion module is proposed to fuse the outputs of the two streams, jointly exploiting the light field intrinsic and geometric information and addressing the insufficient exploitation of information in existing methods. We conduct a series of experiments to validate the effectiveness of each component of the framework and demonstrate that our method achieves state-of-the-art performance across diverse scenes.
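The confidence-based stream fusion can be pictured as a learned per-pixel weighting of the two intermediate novel views. Below is a minimal PyTorch sketch of such a module; the class name `ConfidenceFusion`, its inputs, and all layer sizes are illustrative assumptions based only on this abstract, not the paper's actual implementation.

```python
# Minimal sketch of a confidence-based fusion of two intermediate novel-view
# estimates (spatial-angular stream vs. depth/warping stream). All names and
# layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class ConfidenceFusion(nn.Module):
    def __init__(self, in_channels=3, hidden=32):
        super().__init__()
        # Predicts one per-pixel confidence map per stream from the two
        # intermediate views stacked along the channel dimension.
        self.conf_net = nn.Sequential(
            nn.Conv2d(2 * in_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, view_sa, view_depth):
        # view_sa:    novel view from the spatial-angular stream, (B, C, H, W)
        # view_depth: novel view from the depth (warping) stream, (B, C, H, W)
        logits = self.conf_net(torch.cat([view_sa, view_depth], dim=1))
        weights = torch.softmax(logits, dim=1)            # (B, 2, H, W)
        w_sa, w_depth = weights[:, :1], weights[:, 1:]    # per-pixel weights
        return w_sa * view_sa + w_depth * view_depth


if __name__ == "__main__":
    fusion = ConfidenceFusion()
    a = torch.rand(1, 3, 64, 64)   # stand-in spatial-angular stream output
    b = torch.rand(1, 3, 64, 64)   # stand-in depth-stream output
    print(fusion(a, b).shape)      # torch.Size([1, 3, 64, 64])
```

The softmax over the two confidence maps keeps the per-pixel weights non-negative and summing to one, so the fused view interpolates between the two streams wherever one of them is more reliable.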
