Abstract

This paper presents a multi-objective deep learning method for image fusion, or pansharpening, in remote sensing, based on a Denoising Autoencoder (DA). Two terms are added to the commonly used mean squared error loss function. The first term applies a fractional-order superimposed gradient to the PANchromatic (PAN) image to extract its high-frequency information, and then penalizes the difference between the network output and the resulting edge map of the PAN image. The second term penalizes the difference in the universal image quality index between the low-resolution MultiSpectral (MS) image and the network output for each spectral band. Together, these terms allow both the spatial and the spectral information to be better preserved in the fused image. Experimental results on three public-domain datasets, for both the low-resolution and the full-resolution cases, are reported using commonly used objective metrics. Compared with existing methods, the developed multi-objective method produces comparable results in the low-resolution scenario and superior performance in the full-resolution scenario.
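The loss structure described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the weighting factors `lam_spatial` and `lam_spectral` are hypothetical, and a plain first-order gradient magnitude stands in for the fractional-order superimposed gradient operator used in the paper.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two single-band images
    (Wang & Bovik's Q index; equals 1 for identical images)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

def gradient_magnitude(img):
    """Stand-in for the fractional-order superimposed gradient:
    a simple first-order finite-difference gradient magnitude."""
    gy, gx = np.gradient(img)
    return np.sqrt(gx**2 + gy**2)

def fusion_loss(fused, ms_up, pan, lam_spatial=0.1, lam_spectral=0.1):
    """Multi-objective pansharpening loss (sketch).

    fused, ms_up : arrays of shape (bands, H, W)
                   (network output and upsampled low-resolution MS image)
    pan          : array of shape (H, W) (panchromatic image)
    """
    # Base term: mean squared error against the (upsampled) MS image.
    mse = ((fused - ms_up) ** 2).mean()

    # Spatial term: match the edge map extracted from the PAN image.
    pan_edges = gradient_magnitude(pan)
    fused_intensity = fused.mean(axis=0)
    spatial = ((gradient_magnitude(fused_intensity) - pan_edges) ** 2).mean()

    # Spectral term: per-band UIQI between the MS image and the output
    # (UIQI <= 1, so 1 - UIQI is a non-negative penalty).
    spectral = np.mean([1.0 - uiqi(ms_up[b], fused[b])
                        for b in range(fused.shape[0])])

    return mse + lam_spatial * spatial + lam_spectral * spectral
```

For a perfectly consistent triple (output equal to the MS image, PAN edges matching the output's intensity edges) the loss is close to zero, which is the intended optimum of this multi-objective formulation.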
