Abstract

Remote sensing image fusion (RSIF) can generate an integrated image with high spatial and spectral resolution. The fused remote sensing image is conducive to applications including disaster monitoring, ecological environment investigation, and dynamic monitoring. However, most existing deep-learning-based RSIF methods require ground truths (or reference images) to train a model, and acquiring such ground truths is difficult. To address this, we propose a semisupervised RSIF method based on multiscale conditional generative adversarial networks that combines a multiskip connection with a pseudo-Siamese structure. This new method can simultaneously extract the features of panchromatic and multispectral images and fuse them without a ground truth; the adopted multiskip connection helps preserve image details. In addition, we propose a composite loss function, which combines the least squares loss, L1 loss, and peak signal-to-noise ratio (PSNR) loss to train the model; this composite loss helps retain the spatial details and spectral information of the source images. Moreover, we verify the proposed method through extensive experiments, and the results show that the new method achieves outstanding performance without relying on the ground truth.
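The composite loss combines three terms: a least squares (LSGAN-style) adversarial term, an L1 term, and a PSNR term. The sketch below illustrates one plausible way to combine them; the weights (`w_adv`, `w_l1`, `w_psnr`) and the choice of which source image each term is compared against are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l1_loss(fused, target):
    """Mean absolute error between the fused output and a source image."""
    return np.mean(np.abs(fused - target))

def psnr_loss(fused, target, max_val=1.0):
    """Negative PSNR, so that minimizing the loss maximizes PSNR."""
    mse = np.mean((fused - target) ** 2)
    return -10.0 * np.log10(max_val ** 2 / mse)

def least_squares_gan_loss(d_fake):
    """LSGAN generator term: push discriminator scores on fused images toward 1."""
    return np.mean((d_fake - 1.0) ** 2)

def composite_loss(fused, ms_up, pan, d_fake,
                   w_adv=1.0, w_l1=100.0, w_psnr=0.1):
    """Weighted sum of adversarial, L1, and PSNR terms (weights are illustrative)."""
    return (w_adv * least_squares_gan_loss(d_fake)
            + w_l1 * l1_loss(fused, ms_up)      # spectral fidelity vs. upsampled MS
            + w_psnr * psnr_loss(fused, pan))   # spatial fidelity vs. PAN
```

In practice each term would be computed on network tensors (e.g. in PyTorch or TensorFlow) rather than NumPy arrays, but the structure of the weighted sum is the same.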

Highlights

  • Image fusion aims to fuse the complementary information in two or more source images obtained by different sensors so that a new comprehensive image can be generated [1][2][3].

  • To ensure that the fused image retains more spatial details and richer spectral information, we treated preserving the spectral features of the MS image and preserving the spatial features of the PAN image as two separate tasks; these tasks are handled by the proposed dual-discriminator structure.

  • Our experiments show that the proposed method can effectively fuse the spectral information of the MS image and the details of the PAN image, and that the method’s performance is competitive with other image fusion methods.


Summary

INTRODUCTION

Image fusion aims to fuse the complementary information in two or more source images obtained by different sensors so that a new comprehensive image can be generated [1][2][3]. In component substitution methods, the fused remote sensing image is obtained by applying an inverse transformation to the new structural component; this kind of method has many problems. Image fusion methods based on multiresolution analysis (MRA) can recover the lost spatial information of MS images from the corresponding high-frequency features of PAN images. Some image fusion methods based on GANs have been proposed [26][27][28] and show good performance, but these methods still suffer from serious problems in training because ground truth or reference images are lacking in RSIF. The contributions of this work are summarized as follows: a) We propose a novel end-to-end semisupervised image fusion method that does not need a ground truth image (with high spatial and spectral resolution) and can achieve good image fusion performance.
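The MRA idea mentioned above (injecting the high-frequency content of the PAN image into the upsampled MS bands) can be sketched as follows. The box-filter low-pass stage and the unit injection gain are illustrative assumptions; real MRA methods typically use wavelet or Laplacian pyramids and band-dependent gains.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter used here as a simple MRA low-pass stage."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)

def mra_fuse(ms_up, pan, gain=1.0, k=5):
    """Inject PAN high frequencies (PAN minus its low-pass version)
    into every band of the upsampled MS image."""
    detail = pan - box_blur(pan, k)          # high-frequency residual of PAN
    return ms_up + gain * detail[..., None]  # broadcast over MS bands
```

A flat (constant) PAN image contributes no detail in the interior of the frame, so the fused output there equals the upsampled MS input, which matches the intuition that MRA only transfers high-frequency spatial structure.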

Remote Sensing Image Fusion Based on Deep Learning
Least Squares Generative Adversarial Network
THE PROPOSED METHOD
Network Structures and Processes
Loss Function
Dataset
Experimental Setup
Comparison Experiments
Findings
CONCLUSION
