Abstract

Most existing multispectral (MS) image fusion algorithms suffer from spectral or spatial information distortion. Motivated by this, we propose an edge-guided MS image fusion algorithm. It combines the advantages of generative adversarial networks (GANs) and an improved fusion framework, so the fused image better preserves the spectral information of the original MS image while injecting spatial detail. Specifically, an MS image with richer detail is first generated by a GAN for preliminary reconstruction; panchromatic (PAN) image edge information and an adversarial learning strategy are introduced for robust MS image reconstruction. Then, the complete fusion system is constructed from the reconstructed MS image and a general component-substitution fusion framework, with an enhancement operator introduced to inject spatial details. Extensive evaluations on multiple datasets show that our approach outperforms several state-of-the-art fusion methods in both objective quality and human visual perception.
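The component-substitution stage referred to above can be sketched with the standard generic CS formula, F_k = M_k + g_k (P − I_L), where I_L is a weighted intensity synthesized from the MS bands. The weights, gains, and function names below are illustrative assumptions, not the paper's specific operator:

```python
import numpy as np

def cs_fuse(ms_up, pan, weights, gains):
    """Generic component-substitution fusion sketch.

    ms_up   : (H, W, B) upsampled (or GAN-reconstructed) MS bands
    pan     : (H, W)    panchromatic image
    weights : (B,)      band weights used to synthesize the intensity I_L
    gains   : (B,)      per-band detail-injection gains g_k
    """
    # Synthesize the low-resolution intensity component I_L = sum_k w_k * M_k
    intensity = np.tensordot(ms_up, weights, axes=([2], [0]))
    # Spatial detail to inject: difference between PAN and synthesized intensity
    detail = pan - intensity
    # Inject the detail into every band, scaled by its gain
    return ms_up + gains[None, None, :] * detail[:, :, None]

# Toy usage: 3-band constant MS image, brighter PAN image
ms = np.ones((2, 2, 3))
pan = np.full((2, 2), 2.0)
fused = cs_fuse(ms, pan, weights=np.full(3, 1 / 3), gains=np.ones(3))
```

With equal weights the intensity is 1 everywhere, so each band receives the full detail (2 − 1 = 1) and the fused bands all equal 2.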
