Abstract

The aim of multispectral (MS) and panchromatic (PAN) image fusion is to obtain an MS image with high resolution in both the spectral and spatial domains. The fusion process involves two key issues: spectral information preservation and spatial information enhancement. In this article, we propose a dual-collaborative fusion model that considers not only spectral correlation collaboration but also spatial-spectral collaboration. First, the features of the PAN and MS images are extracted by a shared feature embedding network. Then, to enhance spatial details, the PAN features are decomposed into four subbands, and the collaborative relationships among the subbands are fully explored to refine the features. After the subbands are refined, the high-frequency components are fed directly into the reconstruction network, while the low-frequency components are transformed by the guidance generation network to accomplish the spatial-spectral collaboration and to prepare for the spectral adjustment. To exploit spectral correlation collaboration, a novel graph convolutional network is designed to modulate intraspectral relationships. Finally, the adjusted MS features are combined with the high-frequency components of the PAN features to reconstruct the high-resolution MS image. Experimental results show that the proposed method outperforms both traditional state-of-the-art pan-sharpening methods and existing deep learning-based ones.
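The abstract mentions decomposing PAN features into four subbands but does not specify the transform. A minimal sketch of one common choice, a single-level 2-D Haar wavelet decomposition, is shown below purely for illustration; the function name and the assumption that a Haar-style transform is used are hypothetical, not taken from the paper.

```python
import numpy as np

def haar_subbands(x):
    """Illustrative single-level 2-D Haar decomposition into four subbands.

    Splits a 2-D feature map into one low-frequency band (LL) and three
    high-frequency bands (LH, HL, HH), halving each spatial dimension.
    Assumes the input has even height and width.
    """
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency approximation
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# Toy 4x4 "PAN feature map": each subband comes out 2x2.
pan = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_subbands(pan)
```

In such a scheme, `lh`, `hl`, and `hh` would carry the spatial details passed on to reconstruction, while `ll` would be the low-frequency component handed to a guidance step.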
