Abstract
Light field (LF) images captured by plenoptic cameras record both spatial and angular information from real-world scenes, and fully integrating these two kinds of information is beneficial for image super-resolution (SR). However, most existing approaches to LF image SR cannot fully fuse information at the spatial and angular levels, and SR performance is further limited by the difficulty of incorporating distinctive information from different views and extracting informative features from each view. To address these issues, we propose a fusion and allocation network (LF-FANet) for LF image SR. Specifically, we design an angular fusion operator (AFO) that fuses distinctive features among different views, and a spatial fusion operator (SFO) that extracts deep representation features for each view. Building on these two operators, we further propose a fusion and allocation strategy to incorporate and propagate the fused features. In the fusion stage, an interaction information fusion block (IIFB) fully supplements distinctive and informative features among all views. In the allocation stage, the fused output features are allocated to the next AFO and SFO for further distillation of valid information. Experimental results on both synthetic and real-world datasets demonstrate that our method achieves performance on par with state-of-the-art methods, preserves the parallax structure of LF, and generates faithful details in LF images.
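The abstract's fusion-and-allocation loop (AFO and SFO feeding an IIFB, whose output is allocated back to the next AFO/SFO) can be sketched as follows. This is a toy illustration only: the operator names come from the abstract, but their internal structure here (mean-view mixing, a box filter, an averaging fusion, the `0.5` weights, and the function names themselves) are placeholder assumptions, not the paper's actual layers.

```python
import numpy as np

def angular_fusion(views):
    # AFO (toy stand-in): supplement each view with information
    # aggregated across all views -- here simply the mean view.
    shared = views.mean(axis=0, keepdims=True)
    return views + 0.5 * shared  # hypothetical mixing weight

def spatial_fusion(views):
    # SFO (toy stand-in): per-view spatial feature extraction,
    # represented by a 3x3 box filter applied to each view.
    out = np.empty_like(views)
    k = np.ones((3, 3)) / 9.0
    for i, v in enumerate(views):
        padded = np.pad(v, 1, mode="edge")
        out[i] = sum(
            k[dy, dx] * padded[dy:dy + v.shape[0], dx:dx + v.shape[1]]
            for dy in range(3) for dx in range(3)
        )
    return out

def iifb(ang_feat, spa_feat):
    # IIFB (toy stand-in): fuse the angular and spatial branches.
    return 0.5 * (ang_feat + spa_feat)

def fanet_trunk(views, num_stages=3):
    # Fusion-and-allocation loop: each stage's fused output is
    # allocated to the next AFO and SFO as their input.
    feat = views
    for _ in range(num_stages):
        feat = iifb(angular_fusion(feat), spatial_fusion(feat))
    return feat

views = np.random.rand(9, 16, 16)  # 3x3 angular grid flattened; 16x16 views
out = fanet_trunk(views)
print(out.shape)  # (9, 16, 16)
```

The point of the sketch is the dataflow: angular and spatial branches run in parallel per stage, are fused once, and the fused tensor is reused by both branches in the next stage, rather than each branch propagating independently.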