Abstract

Pan-sharpening in remote sensing image fusion refers to obtaining high-resolution multi-spectral images by fusing panchromatic images with low-resolution multi-spectral images. Recently, convolutional neural network (CNN)-based pan-sharpening methods have achieved state-of-the-art performance. Even so, two problems remain. On the one hand, existing CNN-based strategies require supervision, where the low-resolution multi-spectral image is obtained by simply blurring and down-sampling the high-resolution one. On the other hand, they typically ignore the rich spatial information of panchromatic images. To address these issues, we propose a novel unsupervised framework for pan-sharpening based on a generative adversarial network, termed Pan-GAN, which does not rely on so-called ground truth during network training. In our method, the generator separately establishes adversarial games with a spectral discriminator and a spatial discriminator, so as to preserve the rich spectral information of multi-spectral images and the spatial information of panchromatic images. Extensive experiments demonstrate the effectiveness of the proposed Pan-GAN compared with other state-of-the-art pan-sharpening approaches. Pan-GAN shows promising performance in terms of both qualitative visual effects and quantitative evaluation metrics.
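The following is a minimal PyTorch sketch of the dual-discriminator idea summarized above: a generator fuses the up-sampled multi-spectral (MS) image with the panchromatic (PAN) image and is trained against a spectral discriminator and a spatial discriminator, without any high-resolution ground truth. The network architectures, loss formulation, and variable names here are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch only: layer sizes, losses, and the way the two discriminators see the
# fused image are assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Fuses an up-sampled low-resolution MS image with a PAN image."""
    def __init__(self, ms_bands=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ms_bands, 3, padding=1),
        )

    def forward(self, ms_up, pan):
        # Concatenate spectral (MS) and spatial (PAN) inputs along channels.
        return self.net(torch.cat([ms_up, pan], dim=1))


class Discriminator(nn.Module):
    """Small patch-style critic; instantiated twice, once per adversarial game."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# One unsupervised generator step (no high-resolution MS ground truth involved).
ms_bands = 4
G = Generator(ms_bands)
D_spectral = Discriminator(ms_bands)  # judges spectral fidelity against the MS input
D_spatial = Discriminator(1)          # judges spatial fidelity against the PAN input

adv = nn.MSELoss()                        # least-squares GAN loss, an illustrative choice
ms_up = torch.rand(1, ms_bands, 64, 64)   # up-sampled low-resolution MS image
pan = torch.rand(1, 1, 64, 64)            # panchromatic image

fused = G(ms_up, pan)

# Spectral game: the fused image should be indistinguishable from the MS input.
pred_spec = D_spectral(fused)
loss_spec = adv(pred_spec, torch.ones_like(pred_spec))

# Spatial game: the band-averaged intensity of the fused image should be
# indistinguishable from the PAN image.
intensity = fused.mean(dim=1, keepdim=True)
pred_spat = D_spatial(intensity)
loss_spat = adv(pred_spat, torch.ones_like(pred_spat))

g_loss = loss_spec + loss_spat  # generator objective; discriminator updates omitted
```

The key point of the sketch is that both adversarial targets are derived from the inputs themselves (the MS image for the spectral game, the PAN image for the spatial game), which is what removes the need for a simulated high-resolution reference during training.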
