Abstract
In this article, we propose a novel graph convolutional network (GCN) for pansharpening, termed GCPNet, which consists of three main modules: the spatial GCN module (SGCN), the spectral band GCN module (BGCN), and the atrous spatial pyramid module (ASPM). Specifically, owing to the nature of GCNs, the proposed SGCN and BGCN can explore the long-range relationships between objects and the global context in both the spatial and spectral domains, which benefits the pansharpened results and has not been fully investigated before. In addition, the designed ASPM is equipped with multiscale atrous convolutions and learns richer local features, so as to cover objects of different sizes in satellite images. To further enhance the representation of the proposed GCPNet, asynchronous knowledge distillation is introduced to provide compact features through heterogeneous task imitation in a teacher–student paradigm. In this paradigm, the teacher network acts as a variational autoencoder that extracts compact features from the ground-truth multispectral (MS) images, while the student network, devised for pansharpening, is trained with the assistance of the teacher so that the important information of the ground-truth MS images is transferred. Extensive experiments on different satellite datasets demonstrate that the proposed network outperforms state-of-the-art methods both visually and quantitatively. The source code is released at https://github.com/Keyu-Yan/GCPNet.
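To make the multiscale idea behind the ASPM concrete, the following is a minimal PyTorch sketch (not the authors' code from the repository above) of an atrous spatial pyramid block: parallel dilated convolutions whose outputs are fused with a 1x1 convolution, so that receptive fields of several sizes can cover objects of different scales. The class name, channel width, and dilation rates are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class ASPMSketch(nn.Module):
    """Hypothetical multiscale atrous block; all hyperparameters here
    (channels, dilation rates) are assumptions for illustration."""

    def __init__(self, channels: int = 64, rates=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = rate keeps the
        # spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        # Fuse the concatenated branch outputs back to the input width.
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the input at a different effective receptive field.
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


# Usage: a 64-channel feature map from a satellite image patch.
x = torch.randn(1, 64, 128, 128)
print(ASPMSketch()(x).shape)  # torch.Size([1, 64, 128, 128])
```

Because each dilated branch enlarges the receptive field without downsampling, small structures (roads, buildings) and large ones (fields, water bodies) can be captured at the same feature resolution.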
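Likewise, the teacher–student feature transfer described in the abstract can be sketched as a composite training objective: the student's intermediate features are pulled toward the compact features that a pretrained, frozen teacher (the variational autoencoder) extracts from the ground-truth MS image. The function name, the use of L1 distances, and the weighting factor alpha are all assumptions; the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_feat: torch.Tensor,
                      teacher_feat: torch.Tensor,
                      student_out: torch.Tensor,
                      gt_ms: torch.Tensor,
                      alpha: float = 0.1) -> torch.Tensor:
    """Hypothetical combined objective for the student network."""
    # Pansharpening reconstruction term against the ground-truth MS image.
    recon = F.l1_loss(student_out, gt_ms)
    # Imitation term: match the teacher's compact features
    # (teacher_feat is assumed to come from a frozen, pretrained VAE).
    imitate = F.l1_loss(student_feat, teacher_feat)
    return recon + alpha * imitate
```

The teacher is trained on a different task (reconstructing ground-truth MS images) than the student (pansharpening), which is why the abstract calls the imitation "heterogeneous" and the distillation "asynchronous".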