Abstract

Recently, various convolutional neural network (CNN) based single image super-resolution (SR) methods have been vigorously explored, and many impressive results have emerged. However, most of these methods focus mainly on increasing network depth to improve reconstruction performance. In practice, a deeper network usually means more parameters and computation, and this growth often makes the network difficult to train. This paper develops a new SR approach called the multi-scale residual channel attention network (MSRCAN), a comparatively shallow two-stage network structure that extracts richer detail to effectively improve SR quality. Specifically, a multi-scale residual channel attention block (MSRCAB) is designed to fully exploit image features with convolutional kernels of different sizes. At the same time, a channel attention mechanism is introduced to adaptively recalibrate the channel-wise importance of feature maps. Furthermore, multiple short skip connections and a long skip connection in each MSRCAB compensate for information loss, and the two-stage design helps to fully uncover both low-level and high-level information. Evaluation on benchmark data sets indicates that the proposed method rivals state-of-the-art convolutional methods.
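The channel attention mechanism mentioned above can be sketched generically in the squeeze-and-excitation style commonly used in SR networks; the shapes, reduction ratio, and weight layout below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Adaptively recalibrate per-channel importance of a feature map.

    features: (C, H, W) feature map
    w1: (C//r, C) squeeze weights; w2: (C, C//r) excitation weights
    (r is a hypothetical channel-reduction ratio.)
    """
    # Squeeze: global average pooling over spatial dims -> channel descriptor (C,)
    descriptor = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, giving gates in (0, 1)
    hidden = np.maximum(0.0, w1 @ descriptor)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))
    # Rescale each channel of the feature map by its learned importance
    return features * scale[:, None, None]

# Toy example: 4 channels, reduction ratio r = 2, random weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)
assert y.shape == x.shape
```

Because the sigmoid gates lie strictly in (0, 1), each channel is attenuated in proportion to its estimated importance while the spatial layout is left untouched.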
