Abstract

Pan-sharpening of remote sensing images is an effective way to obtain high spatial resolution multi-spectral (HRMS) images by fusing low spatial resolution multi-spectral (LRMS) images with high spatial resolution panchromatic (PAN) images. Recently, many remote sensing image pan-sharpening methods based on convolutional neural networks (CNNs) have been proposed and have achieved excellent performance. However, two drawbacks remain. On the one hand, since no ideal HRMS images exist as learning targets, most existing methods require extra effort to produce simulated data for training. On the other hand, these methods ignore the local features of the original images. To address these issues, we propose an unsupervised multi-scale generative adversarial network method, which can be trained directly on full-resolution images without down-sampling. Firstly, a multi-scale dense generator network is proposed to extract features from the original images and generate HRMS images. Secondly, two discriminators are used to preserve the spectral information of the LRMS images and the spatial information of the PAN images, respectively. Finally, to improve the quality of the fused image and enable training in the unsupervised setting, a new loss function is proposed. Experimental results on the QuickBird and GaoFen-2 data sets demonstrate that the proposed method obtains much better fusion results on full-resolution images.
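The two-discriminator objective described above can be illustrated with a minimal sketch. The function names, least-squares adversarial terms, and weights `alpha` and `beta` below are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def lsgan_g_loss(d_fake):
    # Least-squares generator term: push discriminator scores toward 1 ("real").
    return np.mean((d_fake - 1.0) ** 2)

def unsupervised_pansharpen_loss(fused, lrms_up, pan,
                                 d_spec_score, d_spat_score,
                                 alpha=1.0, beta=1.0):
    """Hypothetical unsupervised composite loss for a fused HRMS estimate.

    fused:        (bands, H, W) generated HRMS image
    lrms_up:      (bands, H, W) LRMS image upsampled to PAN resolution
    pan:          (H, W) panchromatic image
    d_spec_score: spectral discriminator output on the fused image
    d_spat_score: spatial discriminator output on the fused image
    """
    # Spectral consistency: fused bands should agree with the upsampled LRMS.
    spectral = np.mean((fused - lrms_up) ** 2)
    # Spatial consistency: band-averaged intensity should agree with the PAN image.
    spatial = np.mean((fused.mean(axis=0) - pan) ** 2)
    # Adversarial terms from the spectral and spatial discriminators.
    adv = lsgan_g_loss(d_spec_score) + lsgan_g_loss(d_spat_score)
    return spectral + alpha * spatial + beta * adv
```

In this sketch the two discriminators play the roles the abstract assigns them: one scores spectral fidelity against the LRMS input, the other scores spatial fidelity against the PAN input, so no ground-truth HRMS image is needed.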
