Abstract

Deep convolutional neural networks (CNNs) show encouraging performance in image super-resolution (SR) by learning a nonlinear mapping from low-resolution (LR) images to high-resolution (HR) images. Recent SR methods focus on designing deeper network structures; however, deeper networks are usually harder to train. In this paper, we propose a new end-to-end residual attention network (RAN), composed of a series of residual attention modules. We use two types of attention module in the network, which better exploit feature correlations along the channel and spatial dimensions and focus learning on high-frequency information. Experimental results show that our RAN surpasses state-of-the-art SR methods in the quantitative metrics PSNR and SSIM as well as in visual quality.
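The abstract does not spell out the attention modules, but a common reading of channel attention is squeeze-and-excitation-style gating, and of spatial attention as per-pixel gating across channels. The following NumPy sketch illustrates that pattern inside a residual block; it is an assumption-based illustration, not the authors' implementation (the MLP weights `w1`/`w2`, the reduction ratio `r`, and the simplified conv-free spatial gate are all hypothetical).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W). Squeeze: global average pool over spatial dims.
    squeezed = feat.mean(axis=(1, 2))            # (C,)
    # Excitation: two-layer bottleneck MLP with ReLU, then sigmoid gating.
    hidden = np.maximum(0.0, w1 @ squeezed)      # (C // r,)
    weights = sigmoid(w2 @ hidden)               # (C,)
    # Rescale each channel by its learned importance.
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # Pool across the channel axis (mean + max), then gate each pixel.
    # Simplified: a real module would usually apply a conv before the sigmoid.
    avg = feat.mean(axis=0)                      # (H, W)
    mx = feat.max(axis=0)                        # (H, W)
    gate = sigmoid(avg + mx)                     # (H, W)
    return feat * gate[None, :, :]

def residual_attention_block(feat, w1, w2):
    # Residual connection around the two attention stages, so the block
    # can focus on high-frequency residuals while easing training.
    out = channel_attention(feat, w1, w2)
    out = spatial_attention(out)
    return feat + out

# Toy example with hypothetical sizes: C=8 channels, 4x4 spatial, reduction r=2.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = residual_attention_block(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Stacking several such blocks, as the abstract describes, yields the end-to-end residual attention network.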
