Abstract

Due to limitations of satellite sensors, it is difficult to acquire a high-resolution multispectral (HRMS) image directly. The aim of pansharpening is to fuse the spatial information in a panchromatic (PAN) image with the spectral information in a multispectral (MS) image. Recently, deep learning has drawn much attention, and several pioneering attempts related to pansharpening have been made in the field of remote sensing. However, the large volume of remote sensing data yields abundant training samples, which call for a deeper neural network, whereas most current networks are relatively shallow and thus risk losing details. In this letter, we propose a residual encoder–decoder conditional generative adversarial network (RED-cGAN) for pansharpening that produces sharpened images with more details. The proposed method combines the idea of an autoencoder with a generative adversarial network (GAN), which can effectively preserve the spatial and spectral information of the PAN and MS images simultaneously. First, a residual encoder–decoder module is adopted to extract multiscale features for yielding pansharpened images and to relieve the training difficulty caused by deepening the network layers. Second, to further encourage the generator to preserve spatial information, a conditional discriminator that takes the PAN and MS images as input is proposed to push the estimated MS images toward the same distribution as the reference HRMS images. Experiments conducted on WorldView-2 (WV2) and WorldView-3 (WV3) images demonstrate that the proposed method provides better results than several state-of-the-art pansharpening methods.
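
As a rough illustration of the architecture summarized above, the following PyTorch sketch pairs a residual encoder–decoder generator with a conditional discriminator that also receives the PAN and MS inputs. This is a minimal sketch under assumed settings, not the letter's implementation: the channel widths, number of residual blocks, 4-band MS assumption, and patch sizes are all placeholders.

```python
# Hypothetical RED-cGAN-style pansharpening sketch (layer sizes are illustrative).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection eases training of deeper stacks

class Generator(nn.Module):
    """Residual encoder-decoder: PAN (1 band) + upsampled MS (4 bands) -> HRMS (4 bands)."""
    def __init__(self, ms_bands=4, ch=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1 + ms_bands, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # downsample
        )
        self.res = nn.Sequential(*[ResidualBlock(ch * 2) for _ in range(4)])
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # upsample
            nn.Conv2d(ch, ms_bands, 3, padding=1),
        )

    def forward(self, pan, ms_up):
        x = torch.cat([pan, ms_up], dim=1)  # fuse spatial (PAN) and spectral (MS) inputs
        return self.dec(self.res(self.enc(x)))

class Discriminator(nn.Module):
    """Conditional discriminator: scores an HRMS candidate given the PAN and MS inputs."""
    def __init__(self, ms_bands=4, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 2 * ms_bands, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, padding=1),  # patch-level real/fake scores
        )

    def forward(self, pan, ms_up, hrms):
        # Conditioning on PAN and MS encourages the generated HRMS to match
        # the distribution of reference HRMS images given those inputs.
        return self.net(torch.cat([pan, ms_up, hrms], dim=1))

if __name__ == "__main__":
    pan = torch.randn(1, 1, 64, 64)    # panchromatic patch
    ms_up = torch.randn(1, 4, 64, 64)  # MS patch upsampled to PAN resolution
    fake = Generator()(pan, ms_up)
    score = Discriminator()(pan, ms_up, fake)
    print(fake.shape, score.shape)     # (1, 4, 64, 64) and a patch score map
```

In a cGAN training loop, the discriminator would be trained to separate (PAN, MS, reference HRMS) triples from (PAN, MS, generated HRMS) triples, while the generator is trained to fool it, typically alongside a pixel-wise reconstruction loss.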
