Abstract

Recently, deep learning has become one of the most popular tools for pansharpening, and many methods built on it have shown strong performance. However, a non-negligible problem is the absence of ground truth (GT). A common workaround is to use degraded images as training input while the original images serve as GT. The mapping learned in this way between low resolution (LR) and high resolution (HR) is simulated rather than real, which may cause spectral distortion or insufficient spatial texture enhancement in the fused images. To address this drawback, a novel unsupervised attention pansharpening network (UAP-Net) is proposed. UAP-Net contains two major components: 1) a deep residual network (DRN) and 2) a spatial texture attention block (STAB). The DRN extracts spectral features and spatial detail features from the low-resolution multi-spectral (LRMS) and panchromatic (PAN) images and fuses them to make them more representative. The STAB adopts the high-frequency component of the corresponding input PAN image as a weight to enhance the spatial details of the residual block's output features. Moreover, a new loss function comprising two spatial losses and two spectral losses is established; the losses are computed in the spatial domain and the frequency domain, respectively. Experiments on Gaofen-2 and WorldView-2 remote sensing data demonstrate that UAP-Net can fuse PAN and LRMS images effectively without the help of a high-resolution multi-spectral (HRMS) reference. The proposed framework is fully general, can be applied to many multisource remote sensing image fusion tasks, and achieves optimal performance in terms of both subjective visual effect and quantitative evaluation.
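
To make the two mechanisms above concrete, the sketch below illustrates in PyTorch (a) a STAB-style weighting in which the high-frequency component of the PAN image gates the residual features, and (b) a four-term unsupervised loss with spectral and spatial terms evaluated in both the spatial and frequency domains. This is a minimal sketch of the idea as described in the abstract, not the paper's exact formulation: the box-blur high-pass filter, sigmoid gating, bicubic downsampling factor, FFT-magnitude comparison, and the mean-over-bands intensity proxy are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SpatialTextureAttention(nn.Module):
        # STAB-style weighting (assumed form): the high-frequency component
        # of PAN gates the residual features so that texture-rich locations
        # are emphasized while smooth regions pass through unchanged.
        def __init__(self, kernel_size=5):
            super().__init__()
            self.kernel_size = kernel_size

        def high_pass(self, pan):
            # High frequency = PAN minus a low-pass (box-blurred) copy.
            pad = self.kernel_size // 2
            low = F.avg_pool2d(pan, self.kernel_size, stride=1, padding=pad)
            return pan - low

        def forward(self, feats, pan):
            # Squash the high-frequency map to (0, 1) and use it as a
            # per-pixel weight; the residual term preserves smooth regions.
            weight = torch.sigmoid(self.high_pass(pan))
            return feats + feats * weight


    def unsupervised_loss(fused, lrms, pan, scale=4):
        # Hypothetical four-term loss mirroring the abstract: two spectral
        # terms (against LRMS) and two spatial terms (against PAN), each
        # computed once in the spatial domain and once in the frequency
        # domain. The pairings and the L1 norm are assumptions.
        fused_lr = F.interpolate(fused, scale_factor=1.0 / scale,
                                 mode='bicubic', align_corners=False)
        spec_spatial = F.l1_loss(fused_lr, lrms)
        spec_freq = F.l1_loss(torch.fft.rfft2(fused_lr).abs(),
                              torch.fft.rfft2(lrms).abs())
        intensity = fused.mean(dim=1, keepdim=True)  # crude PAN-like intensity
        spat_spatial = F.l1_loss(intensity, pan)
        spat_freq = F.l1_loss(torch.fft.rfft2(intensity).abs(),
                              torch.fft.rfft2(pan).abs())
        return spec_spatial + spec_freq + spat_spatial + spat_freq

Under these assumptions, a training step would simply minimize unsupervised_loss(net(lrms_up, pan), lrms, pan), which requires no HRMS reference, consistent with the unsupervised setting the abstract describes.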
