Abstract

Pansharpening aims at fusing a low-resolution multispectral (MS) image and a high-resolution (HR) panchromatic (PAN) image acquired by a satellite to generate an HR MS image. Many deep learning based methods have been developed in the past few years. However, since no ideal HR MS images exist as references for learning, almost all of the existing methods downsample the MS and PAN images and regard the original MS images as targets to form a supervised setting for training. These methods may perform well on the down-scaled images; however, they generalize poorly to the full-resolution images. To conquer this problem, we design an unsupervised framework that is able to learn directly from the full-resolution images without any preprocessing. The model is built based on a novel generative multiadversarial network. We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual discriminator to preserve the spectral and spatial information of the inputs when performing fusion. Furthermore, a novel loss function is introduced to facilitate training under the unsupervised setting. Experiments and comparisons with other state-of-the-art methods on GaoFen-2, QuickBird, and WorldView-3 images demonstrate that the proposed method can obtain much better fusion results on the full-resolution images. Code is available online at https://github.com/zhysora/PGMAN.
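To make the described architecture concrete, the following is a minimal PyTorch sketch of a two-stream generator (separate PAN and MS feature branches, then fusion) paired with a dual-discriminator setup. This is an illustrative assumption, not the authors' exact PGMAN model: the layer counts, channel widths, and the patch-style discriminator design are placeholders.

```python
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Sketch of a two-stream generator: separate branches extract
    modality-specific features from the PAN and (upsampled) MS inputs,
    and a fusion head predicts the HR MS image. Layer sizes are
    illustrative, not taken from the paper."""
    def __init__(self, ms_bands=4, feats=32):
        super().__init__()
        self.pan_branch = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())
        self.ms_branch = nn.Sequential(
            nn.Conv2d(ms_bands, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * feats, ms_bands, 3, padding=1)

    def forward(self, pan, ms_lr):
        # Upsample the low-resolution MS to the PAN grid before fusion.
        ms_up = nn.functional.interpolate(
            ms_lr, size=pan.shape[-2:], mode='bicubic', align_corners=False)
        f = torch.cat([self.pan_branch(pan), self.ms_branch(ms_up)], dim=1)
        return self.fuse(f)

def make_discriminator(in_ch, feats=32):
    """Patch-style discriminator; one instance judges spectral fidelity
    (MS-like input), another judges spatial fidelity (PAN-like input)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, feats, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(feats, 1, 4, stride=2, padding=1))
```

In this dual setup, the spectral discriminator would compare the fused output (e.g., downsampled) against the input MS bands, while the spatial discriminator would compare a single-band projection of the output against the PAN image, so that neither modality dominates the adversarial signal.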

Highlights

  • Due to physical constraints [1], many satellites, such as QuickBird, GaoFen-1, GaoFen-2, and WorldView-1/2, only offer a pair of modalities at the same time: multispectral (MS) images at a low spatial resolution and panchromatic (PAN) images at a high spatial resolution but a low spectral resolution

  • We conduct extensive experiments on three datasets with images collected from GaoFen-2, QuickBird, and WorldView-3 satellites

  • Wald’s protocol has been widely used to assess pansharpening methods: the original MS and PAN images are spatially degraded before being fed into models, with the reduction factor equal to the ratio between their spatial resolutions, and the original MS images are then taken as reference images for comparison
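The degradation step of Wald's protocol can be sketched as below. This is a simplified assumption: plain block averaging stands in for the MTF-matched low-pass filtering and interpolation typically used in practice, and the function name is hypothetical.

```python
import numpy as np

def wald_degrade(ms, pan, ratio=4):
    """Degrade MS and PAN by the PAN/MS resolution ratio (Wald's protocol).
    Block averaging is a stand-in for proper MTF-matched filtering."""
    def block_reduce(img, r):
        h, w = img.shape[:2]
        h, w = h - h % r, w - w % r
        img = img[:h, :w]
        if img.ndim == 2:  # single-band PAN
            return img.reshape(h // r, r, w // r, r).mean(axis=(1, 3))
        # multi-band MS: keep the channel axis intact
        return img.reshape(h // r, r, w // r, r, -1).mean(axis=(1, 3))

    ms_lr = block_reduce(ms, ratio)    # degraded MS: model input
    pan_lr = block_reduce(pan, ratio)  # degraded PAN: model input
    return ms_lr, pan_lr, ms           # original MS serves as the reference
```

For example, with a 4-band MS image of 64x64 pixels and a 256x256 PAN image (ratio 4), the degraded inputs are 16x16 and 64x64, and a model fusing them is scored against the original 64x64 MS image.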


Summary

Introduction

Due to physical constraints [1], many satellites, such as QuickBird, GaoFen-1, GaoFen-2, and WorldView-1/2, only offer a pair of modalities at the same time: multispectral (MS) images at a low spatial resolution and panchromatic (PAN) images at a high spatial resolution but a low spectral resolution. Over the past few decades, researchers in the remote sensing community have developed various methods for pansharpening. To distinguish them from the recently proposed deep learning models, we call these methods traditional. Manuscript received March 6, 2021; revised May 20, 2021; accepted June 13, 2021. Date of publication June 23, 2021; date of current version July 1, 2021.

