Abstract

Super-resolution (SR), which aims to recover a high-resolution (HR) image from a single low-resolution (LR) image or a sequence of LR images, is a widely used technology in image processing. In the field of image SR, convolutional neural networks (CNNs) have attracted increasing attention because of their high-quality performance. However, most CNN-based methods treat all channel-wise features equally, which limits their discriminative learning ability across feature channels. Furthermore, many methods fail to fully exploit the information from each convolutional layer. To resolve these problems, we propose a remote sensing image SR method named the dense channel attention network (DCAN). In our DCAN, a sequence of residual dense channel attention blocks (RDCABs) is cascaded with a densely connected structure. In each RDCAB, we make full use of the information from all convolutional layers via densely connected convolutions. Moreover, each RDCAB uses a channel attention mechanism to adaptively recalibrate channel-wise feature responses by explicitly modeling the interdependencies between channels. By densely connecting the RDCABs, DCAN can also make full use of hierarchical features. Finally, to further improve SR performance, the proposed DCAN is learned in both the pixel and wavelet domains, and a fusion layer combines the outputs of these two domains. Extensive quantitative and qualitative evaluations verify the superiority of the proposed method over several state-of-the-art methods.
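The channel attention described in the abstract follows the general squeeze-and-excitation pattern: features are pooled to per-channel descriptors, passed through a small bottleneck, and turned into weights that rescale each channel. The PyTorch sketch below is a minimal illustration of that generic mechanism, not the paper's exact implementation; the module name, reduction ratio, and layer choices are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch):
    global average pooling followed by a bottleneck that produces one weight
    per channel, used to recalibrate the feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.excite(self.pool(x))  # model channel interdependencies
        return x * w                   # recalibrate channel-wise responses


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)      # a batch of 64-channel feature maps
    out = ChannelAttention(64)(feats)
    print(out.shape)                        # torch.Size([2, 64, 32, 32])
```

In an RDCAB-style block, a module like this would sit after the densely connected convolutions and before the residual connection, so that informative channels are amplified before features are passed to the next block.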
