Abstract

Recently, single image super-resolution (SISR) has been widely applied to remote sensing image processing and has achieved remarkable performance. However, existing CNN-based remote sensing image super-resolution methods cannot exploit shallow visual characteristics with a global receptive field, which limits the perceptual capability of these models. Furthermore, the low-resolution inputs and features contain abundant low-frequency information that is weighted equally across channels and spatial locations, restricting the representational ability of CNNs. To address these problems, we propose a non-locally up-down convolutional attention network (NLASR) for remote sensing image super-resolution. First, a non-local feature enhancement block (NLEB) is constructed to capture the spatial context of high-dimensional feature maps, allowing the network to use global information to effectively enhance low-level features with similar structured textures and overcoming the limited perceptual ability of shallow convolutional layers. Second, an enhanced up-sampling channel-wise attention (EUCA) module and an enhanced down-sampling spatial-wise attention (EDSA) module are proposed to weight features at multiple scales. By integrating channel-wise and multi-scale spatial information, the attention modules compute response values over the multi-scale regions of each neuron and thereby establish an accurate mapping from the low-resolution to the high-resolution space. Extensive experiments on the NWPU-RESISC45 and UCMerced-LandUse datasets show that the proposed method achieves performance comparable to or better than state-of-the-art methods in both quantitative and qualitative evaluations.
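The exact definitions of NLEB, EUCA, and EDSA are given in the full paper; as a rough orientation, the sketch below shows the two standard ingredients such a design typically builds on, namely a non-local block that gives every spatial position a global receptive field and an SE-style channel attention that rescales feature channels. All class names, channel counts, and reduction ratios here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalBlock(nn.Module):
    """Generic non-local block (embedded-Gaussian form): each position
    attends to all other positions, i.e. a global receptive field.
    Illustrative only; not the paper's NLEB."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                      # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)        # (b, hw, c')
        attn = F.softmax(q @ k, dim=-1)                 # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection


class ChannelAttention(nn.Module):
    """SE-style channel attention: global pooling plus a bottleneck MLP
    that rescales each channel. Illustrative only; not EUCA/EDSA."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)


# Quick shape check on a dummy feature map
x = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(x).shape, ChannelAttention(64)(x).shape)
```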
