Abstract

Remote sensing satellites can simultaneously capture high spatial resolution panchromatic (PAN) images and low spatial resolution multispectral (MS) images. Pan-sharpening, a branch of remote sensing image fusion, aims to generate high-resolution MS images by integrating the spatial information of PAN images with the spectral characteristics of MS images. In this study, a novel deep perceptual patch generative adversarial network (FDPPGAN) was proposed to solve the pan-sharpening problem. First, a perception generator was constructed; it comprised a matching module that accepts input images of different resolutions, a fusion module, a reconstruction module based on a residual structure, and a perceptual feature extraction module. Second, a patch discriminator was utilized to replace the single real/fake decision on a whole sample with decisions on multiple equally sized image patches, ensuring that the generated results retain more detailed features. Finally, the loss function of FDPPGAN comprised a perceptual feature loss, a content loss, a generator loss, and a discriminator loss. Experiments on the QuickBird and WorldView datasets demonstrated that the proposed algorithm is superior to state-of-the-art algorithms in both subjective and objective indexes.
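
The patch-level discrimination and the composite objective described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer counts, loss weights, and the stand-in feature extractor used for the perceptual term are assumptions chosen only to show the general idea of a patch discriminator combined with adversarial, content, and perceptual losses.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: instead of a single real/fake score
    per image, it outputs a grid of scores, each judging a local patch."""
    def __init__(self, in_channels=4, base=64):
        super().__init__()
        layers, ch = [], base
        layers += [nn.Conv2d(in_channels, ch, 4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        for _ in range(2):  # two further downsampling stages (illustrative depth)
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.BatchNorm2d(ch * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch *= 2
        # final 1-channel map: one logit per receptive-field patch
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def generator_objective(fake_ms, real_ms, d_fake_logits, feature_extractor,
                        w_adv=1e-3, w_content=1.0, w_perc=1e-2):
    """Composite generator objective in the spirit of the abstract:
    adversarial term over patch logits + pixel-wise content loss +
    perceptual feature loss. Weights are placeholders, not paper values."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    content = F.l1_loss(fake_ms, real_ms)
    # `feature_extractor` stands in for the paper's perceptual feature module
    perc = F.l1_loss(feature_extractor(fake_ms), feature_extractor(real_ms))
    return w_adv * adv + w_content * content + w_perc * perc

if __name__ == "__main__":
    D = PatchDiscriminator(in_channels=4)
    fake = torch.randn(2, 4, 64, 64)   # batch of 4-band MS-like tensors
    logits = D(fake)
    print(logits.shape)                # torch.Size([2, 1, 7, 7]) -> per-patch scores
    # a frozen convolution as a toy perceptual feature extractor
    feat = nn.Conv2d(4, 8, 3, padding=1).eval()
    loss = generator_objective(fake, torch.randn_like(fake), logits, feat)
    print(loss.item())

Training each patch logit against its own real/fake label is what lets the discriminator penalize local texture errors, which is why the patch formulation helps preserve fine detail in the fused MS output.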
