Abstract

This paper presents a generative adversarial method for multispectral image fusion in remote sensing that is trained without supervision, easing the burden of collecting labeled training data. The method uses a multi-objective loss function that incorporates both spectral and spatial distortion terms, and two discriminators are designed to minimize, respectively, the spectral and spatial distortions of the generator's output. Extensive experiments are conducted on three public-domain datasets. Comparisons across four reduced-resolution and three full-resolution objective metrics show the superiority of the developed method over several recently developed methods.
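To make the multi-objective loss concrete, the sketch below shows one plausible form of a generator loss driven by two discriminators: adversarial terms from a spectral and a spatial discriminator plus distortion penalties against multispectral and panchromatic references. The function name, the least-squares adversarial form, the L1 distortion terms, and the weights `alpha`/`beta` are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def generator_loss(d_spectral_out, d_spatial_out, fused, ms_ref, pan_ref,
                   alpha=1.0, beta=1.0):
    """Hypothetical multi-objective generator loss (a sketch, not the
    paper's exact formulation).

    Adversarial terms push both discriminators' scores toward the
    'real' label (1.0); the L1 terms penalize spectral distortion
    against the multispectral reference and spatial distortion against
    the panchromatic reference.
    """
    # Least-squares adversarial terms for the two discriminators.
    adv_spectral = np.mean((d_spectral_out - 1.0) ** 2)
    adv_spatial = np.mean((d_spatial_out - 1.0) ** 2)
    # Spectral distortion: per-band L1 difference to the MS reference.
    spectral_l1 = np.mean(np.abs(fused - ms_ref))
    # Spatial distortion: L1 difference between the band-averaged fused
    # image and the panchromatic reference (a common proxy).
    spatial_l1 = np.mean(np.abs(fused.mean(axis=0) - pan_ref))
    return adv_spectral + adv_spatial + alpha * spectral_l1 + beta * spatial_l1
```

A perfectly fused output (zero distortion, both discriminators fully fooled) drives every term to zero; training the two discriminators against this generator would then minimize spectral and spatial distortions jointly.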
