Abstract

Recently, very deep convolutional neural networks (CNNs) have demonstrated impressive performance in single image super-resolution (SISR). However, most CNN-based methods focus solely on designing deeper and wider network structures and fail to exploit the hierarchical and global features of the input image. We therefore propose a residual attention fusion network (RAFN), built on an improved residual fusion (RF) framework, to effectively extract hierarchical features for SISR. The framework comprises two residual fusion structures, each composed of several residual and fusion modules, and realizes a continuous memory mechanism through long and short skip connections, allowing the network to focus on learning more effective features. Furthermore, to maximize the power of the RF framework, we introduce a global context attention (GCA) module that models the global context and captures long-range dependencies. The final RAFN is constructed by applying the proposed RF framework to the GCA blocks. Extensive experiments showed that the proposed network outperforms previous SISR methods while using fewer parameters.
