Abstract

In most CNN-based pansharpening methods, the original multispectral (MS) images are taken as the ground truth, and downsampled panchromatic (Pan) and MS images are used as the training data. However, models trained on such downsampled images are not well suited to pansharpening MS images at their original spatial resolution, where the spatial and spectral information is richest. To tackle this problem, a novel iterative network based on a spectral and textural loss constrained generative adversarial network (GAN) is proposed for pansharpening. First, instead of directly outputting the fused imagery, the GAN generates the mean difference image; supplying the network with a good initial difference image as its input eases the learning task. Second, a coarse-to-fine fusion framework is designed to generate the fused imagery: two optimized discriminators distinguish the generated images, and multi-level fusion of the Pan and MS images produces the final pansharpened image at full resolution. Finally, carefully designed loss functions are embedded in both the generator and the discriminators to preserve the fidelity of the fused imagery. The method was validated on images from the QuickBird, GaoFen-2, and WorldView-2 satellites. The experimental results demonstrate that the proposed method achieves better fusion performance than state-of-the-art methods in both visual comparison and quantitative evaluation.
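To make the spectral and textural loss constraint concrete, the following minimal PyTorch sketch shows one plausible way such terms could be combined with an adversarial term in the generator objective. The L1 formulations, the finite-difference gradient used as a texture proxy, the weights alpha, beta, and gamma, and the function names are illustrative assumptions, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def gradient(img):
    # Horizontal and vertical finite differences, a simple texture proxy
    # (assumed here; the paper's textural term may differ).
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy

def generator_loss(fused, ms_up, pan, d_fake, alpha=1.0, beta=1.0, gamma=1e-3):
    # Spectral term: keep the fused bands close to the upsampled MS bands.
    spectral = F.l1_loss(fused, ms_up)

    # Textural term: match the gradients of the fused intensity to the Pan gradients.
    intensity = fused.mean(dim=1, keepdim=True)
    fdx, fdy = gradient(intensity)
    pdx, pdy = gradient(pan)
    textural = F.l1_loss(fdx, pdx) + F.l1_loss(fdy, pdy)

    # Adversarial term: least-squares GAN objective for the generator
    # (one common choice; the paper's discriminator losses are not specified here).
    adversarial = F.mse_loss(d_fake, torch.ones_like(d_fake))

    return alpha * spectral + beta * textural + gamma * adversarial
```

In this sketch, `fused` is the generator output (upsampled MS plus the generated difference image), `ms_up` the upsampled MS image, `pan` the single-band Pan image, and `d_fake` the discriminator's score on the fused result; all tensors follow the (N, C, H, W) convention.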
