Abstract

Deep convolutional neural networks (CNNs) have achieved significant success in the single image super-resolution (SISR) task. Although previous works have shown that deeper networks yield better performance, simply stacking more layers ignores the complementarity between different feature levels, greatly increases the computational cost, and leads to limited reconstruction quality. In this paper, we extract shallow and deep features separately to enforce their respective contributions to recovering the super-resolution image. We propose a Two-Stream Sparse Network (TSSN), in which a sparse residual block (SRB) efficiently extracts local features and a two-stream architecture explicitly learns shallow and deep features. In addition, we apply an attention mechanism to aggregate the shallow and deep features effectively. Extensive experiments demonstrate that the proposed method outperforms most existing state-of-the-art methods on benchmark datasets in both quantitative metrics and visual quality.
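The attention-based aggregation of the two streams can be illustrated with a minimal sketch. The abstract does not specify the exact attention design, so the following is an assumption: each stream is pooled to a scalar descriptor, a softmax over the descriptors produces per-stream weights, and the streams are fused as a weighted elementwise sum. All function names (`softmax`, `fuse_streams`) and the 1-D feature vectors are hypothetical simplifications of the paper's feature maps.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_streams(shallow, deep):
    """Hypothetical attention fusion of a shallow and a deep feature vector.

    Each stream is reduced to a scalar descriptor by global average
    pooling; a softmax over the two descriptors yields attention
    weights, and the output is the weighted elementwise combination.
    """
    desc = [sum(shallow) / len(shallow), sum(deep) / len(deep)]
    w_shallow, w_deep = softmax(desc)
    return [w_shallow * s + w_deep * d for s, d in zip(shallow, deep)]
```

In this toy form, a stream with a larger pooled response receives a larger fusion weight; in the actual network the weights would be learned per channel rather than computed from raw pooled values.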
