Abstract

Pansharpening is a remote sensing image fusion technique that fuses a panchromatic (PAN) image with a multispectral (MS) image to produce a high-resolution multispectral (HRMS) image. Deep learning has brought a series of advanced unsupervised pansharpening algorithms, but several challenges remain: (1) existing unsupervised pansharpening methods consider only the fusion of single-scale features; (2) existing methods fuse the MS and PAN feature branches directly by concatenation or summation, neither emphasizing critical features nor suppressing redundant ones; and (3) the semantic gap across the long skip connections of the network architecture produces unexpected results. In this paper, we design a multiscale unsupervised architecture based on generative adversarial networks (GANs) for remote sensing image pansharpening (Mun-GAN), which consists of a generator and two discriminators. The generator comprises a multiscale feature extractor (MFE), a self-adaptation weighted fusion (SWF) module, and a nest feature aggregation (NFA) module. First, the MFE extracts multiscale feature information from the input images and passes it to the SWF module for adaptive weighted fusion. The NFA module then reconstructs the multiscale features to obtain HRMS images. The two discriminators, a spectral discriminator and a spatial discriminator, are trained against the generator. Moreover, we design a hybrid loss function to aggregate multiscale spectral and spatial feature information. Comparisons with other state-of-the-art methods on QuickBird, GaoFen-2, and WorldView-3 images demonstrate that Mun-GAN yields better fusion results.
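Since the abstract only names the generator's components, the sketch below shows one plausible way the described data flow (MFE at several scales, SWF weighting of the MS and PAN branches, NFA reconstruction) could be wired up in PyTorch. The module internals, channel counts, number of scales, the exact weighting and aggregation schemes, and all class and parameter names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the Mun-GAN generator data flow; all internals are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFE(nn.Module):
    """Multiscale feature extractor: parallel convolutions with growing
    receptive fields (one assumed realization of "multiscale")."""
    def __init__(self, in_ch, base_ch=32, n_scales=3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Conv2d(in_ch, base_ch, kernel_size=2 * s + 3, padding=s + 1)
            for s in range(n_scales)
        )

    def forward(self, x):
        # Returns one feature map per scale, all at the input resolution.
        return [F.relu(head(x)) for head in self.heads]


class SWF(nn.Module):
    """Self-adaptation weighted fusion: learn per-pixel branch weights
    instead of plain concatenation/summation (weighting scheme assumed)."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, 2, kernel_size=1)

    def forward(self, f_ms, f_pan):
        w = torch.softmax(self.gate(torch.cat([f_ms, f_pan], dim=1)), dim=1)
        return w[:, :1] * f_ms + w[:, 1:] * f_pan


class MunGANGenerator(nn.Module):
    def __init__(self, ms_bands=4, n_scales=3, ch=32):
        super().__init__()
        self.mfe_ms = MFE(ms_bands, ch, n_scales)
        self.mfe_pan = MFE(1, ch, n_scales)
        self.swf = nn.ModuleList(SWF(ch) for _ in range(n_scales))
        # NFA stand-in: aggregate the fused multiscale features and
        # reconstruct the HRMS image (nested aggregation details assumed).
        self.nfa = nn.Sequential(
            nn.Conv2d(n_scales * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ms_bands, 3, padding=1),
        )

    def forward(self, ms_up, pan):
        # ms_up: MS image upsampled to PAN resolution; pan: single-band PAN.
        feats = [swf(fm, fp) for swf, fm, fp in
                 zip(self.swf, self.mfe_ms(ms_up), self.mfe_pan(pan))]
        return self.nfa(torch.cat(feats, dim=1))


if __name__ == "__main__":
    ms_up = torch.randn(1, 4, 256, 256)  # MS upsampled to PAN resolution
    pan = torch.randn(1, 1, 256, 256)    # single-band PAN image
    print(MunGANGenerator()(ms_up, pan).shape)  # torch.Size([1, 4, 256, 256])
```

In a full training setup, this generator would be trained adversarially against the spectral and spatial discriminators under the hybrid loss the abstract describes; those components are omitted here.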
