Abstract

A light field (LF) camera can simultaneously capture the intensity and direction information of light rays, and has therefore attracted wide attention. However, limited by the size of the imaging sensor, the captured LF image (LFI) exhibits a trade-off between spatial and angular resolutions. To address this issue, this paper proposes a new LF super-resolution method based on frequency domain analysis and a semantic prior, which adopts a two-stage learning framework to enhance the spatial and angular resolutions of the LFI. Specifically, the proposed method first decomposes the spatial and angular information to explore the 4D structure of the LFI through a frequency domain transformation, and formulates LF super-resolution as a frequency restoration process. Then, the decomposed frequency components are recovered in a progressive manner by newly designed cascaded 2D and 3D convolutional neural networks. To further improve the quality of the reconstructed LFI, especially at object boundaries, the semantic prior is incorporated into the designed network to enhance its representation ability. Finally, the super-resolved LFI is reconstructed by the inverse frequency domain transformation. Experimental results show that the proposed method effectively generates high-resolution LFIs and outperforms other state-of-the-art methods in terms of both subjective visual perception and objective quality evaluation. Moreover, the proposed method improves the performance of LF applications such as depth estimation.
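Since the abstract does not give implementation details, the following is a minimal, hypothetical PyTorch sketch of the general idea it describes: a 4D light field is mapped to spatial-frequency components, the components are restored by a small cascade of 2D (per-view) and 3D (cross-view) convolutions, and the result is mapped back by an inverse transform. The function names, layer sizes, and 2D-FFT choice are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only (not the paper's code): frequency decomposition of a
# (U, V, H, W) light field, restoration with cascaded 2D/3D convolutions, and
# inverse transformation. All names and layer widths are hypothetical.
import torch
import torch.nn as nn

def lf_to_frequency(lf):
    """lf: (U, V, H, W) light field -> spatial-frequency components.

    Returns a tensor of shape (2, U*V, H, W): channel 0 is the real part,
    channel 1 the imaginary part, so ordinary conv layers can process it.
    """
    U, V, H, W = lf.shape
    spec = torch.fft.fft2(lf.reshape(U * V, H, W))      # FFT over (H, W)
    return torch.stack([spec.real, spec.imag], dim=0)   # (2, U*V, H, W)

def frequency_to_lf(comp, angular_shape):
    """Inverse of lf_to_frequency: rebuild the (U, V, H, W) light field."""
    spec = torch.complex(comp[0], comp[1])               # (U*V, H, W)
    lf = torch.fft.ifft2(spec).real
    U, V = angular_shape
    return lf.reshape(U, V, *lf.shape[-2:])

class CascadedRestorer(nn.Module):
    """Hypothetical cascade: 2D convs applied per view, then 3D convs across
    the stacked views to couple spatial and angular information."""
    def __init__(self, feat=32):
        super().__init__()
        self.spatial = nn.Sequential(                    # 2D stage (per view)
            nn.Conv2d(2, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.angular = nn.Sequential(                    # 3D stage (across views)
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 2, 3, padding=1),
        )

    def forward(self, comp):                             # comp: (2, N, H, W)
        x = comp.permute(1, 0, 2, 3)                     # (N, 2, H, W)
        x = self.spatial(x)                              # (N, feat, H, W)
        x = x.permute(1, 0, 2, 3).unsqueeze(0)           # (1, feat, N, H, W)
        x = self.angular(x).squeeze(0)                   # (2, N, H, W)
        return comp + x                                  # residual restoration

# Usage: a 5x5-view light field with 64x64 spatial resolution.
lf = torch.rand(5, 5, 64, 64)
comp = lf_to_frequency(lf)
restored = CascadedRestorer()(comp)
out = frequency_to_lf(restored.detach(), angular_shape=(5, 5))
```

The progressive restoration, semantic-prior branch, and the actual resolution-enhancement layers of the paper are omitted here; the sketch only shows how frequency components of a 4D light field can be handed to cascaded 2D and 3D convolutions and transformed back.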
