Abstract

Recent years have witnessed the great success of convolutional neural network (CNN) based models for Single Image Super-Resolution (SISR). Most existing Super-Resolution (SR) networks either take bicubic-upscaled images as input, or take low-resolution images as input and apply transposed convolution or sub-pixel convolution only in the reconstruction stage; neither approach exploits the hierarchical features across the network for the final reconstruction. In this paper, we propose a novel stacked U-shape network with channel-wise attention (SUSR) for SISR. The proposed network consists of four parts: a shallow feature extraction block, stacked U-shape blocks that produce high-resolution features, residual channel-wise attention blocks, and a reconstruction block. The hierarchical high-resolution features produced by the U-shape blocks have the same size as the final super-resolved image; thus, unlike existing methods, we perform the upsampling operation inside the U-shape blocks. To fully exploit the different hierarchical features, we propose a residual attention block (RAB) that performs feature refinement by explicitly modeling the relationships between channels. Experiments on five public datasets show that our method achieves higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) scores than state-of-the-art methods.
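The residual attention block described above refines features by explicitly modeling relationships between channels. A minimal numpy sketch of one common way to realize channel-wise attention with a residual skip is shown below; the function names, the bottleneck MLP with reduction ratio, and the weight shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel-wise attention sketch (assumed squeeze-and-excitation style).

    x  : feature map of shape (C, H, W)
    w1 : (C // r, C) channel-reduction weights (r is a reduction ratio)
    w2 : (C, C // r) channel-expansion weights
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating
    z = np.maximum(w1 @ s, 0.0)
    a = sigmoid(w2 @ z)  # per-channel attention weights in (0, 1)
    # Rescale each channel of the input by its attention weight
    return x * a[:, None, None]

def residual_attention_block(x, w1, w2):
    """Hypothetical RAB: channel attention plus an identity skip connection."""
    return x + channel_attention(x, w1, w2)
```

In this sketch the residual skip lets the block fall back to the identity mapping when the attention gates saturate near zero, which is a standard motivation for combining attention with residual learning.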
