Abstract

Although graph convolutional networks have found application in polarimetric synthetic aperture radar (PolSAR) image classification, the available approaches cannot operate on multiple graphs, which hinders their ability to generalize effective feature representations across different datasets. To overcome this limitation and achieve robust PolSAR image classification, this paper proposes a novel end-to-end cross-level interaction graph U-Net (CLIGUNet), in which weighted max-relative spatial graph convolution enables simultaneous learning of latent features from batched inputs. Weighted adjacency matrices, derived from the symmetric revised Wishart distance, are integrated into this convolution to encode polarimetric similarity. Employing end-to-end trainable residual transformers with multi-head attention, the proposed cross-level interactions enable the decoder to fuse multi-scale graph feature representations, reinforcing effective features across scales through a deep supervision strategy. Additionally, multi-scale dynamic graphs are introduced to expand the receptive field, yielding trainable adjacency matrices with refined connectivity and edge weights at each resolution. Experiments on real PolSAR datasets demonstrate the superiority of CLIGUNet over state-of-the-art networks in classification accuracy and in robustness when handling unseen imagery with similar land covers.
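As a rough illustration of two of the building blocks named above, the sketch below constructs a weighted adjacency matrix from pairwise symmetric revised Wishart (SRW) distances between coherency matrices and applies a weighted max-relative aggregation step. This is not the authors' implementation: the exact SRW form, the Gaussian edge weighting, and the parameters `k` and `sigma` are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code) of a Wishart-distance-weighted adjacency
# matrix feeding a weighted max-relative aggregation step. All formulas and
# parameter choices here are illustrative assumptions.
import numpy as np

def srw_distance(T_i, T_j, q=3):
    """One common form of the symmetric revised Wishart distance between
    two q x q polarimetric coherency matrices (assumed form)."""
    return 0.5 * (np.trace(np.linalg.inv(T_i) @ T_j)
                  + np.trace(np.linalg.inv(T_j) @ T_i)).real - q

def weighted_adjacency(T, k=8, sigma=1.0):
    """Build a k-nearest-neighbour adjacency with Gaussian-kernel edge weights
    from pairwise SRW distances. T: (N, q, q) complex coherency matrices."""
    N = T.shape[0]
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            D[i, j] = D[j, i] = srw_distance(T[i], T[j])
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(D[i])[1:k + 1]          # k closest nodes, skip self
        W[i, nbrs] = np.exp(-D[i, nbrs] / sigma)  # polarimetric similarity
    return W

def weighted_max_relative_conv(X, W):
    """Weighted max-relative aggregation: for each node, take the element-wise
    maximum of weighted feature differences to its neighbours, then concatenate
    with the node's own features (a learnable MLP would normally follow)."""
    N, C = X.shape
    agg = np.zeros((N, C))
    for i in range(N):
        nbrs = np.nonzero(W[i])[0]
        if nbrs.size:
            diffs = W[i, nbrs, None] * (X[nbrs] - X[i])
            agg[i] = diffs.max(axis=0)
    return np.concatenate([X, agg], axis=1)
```

In the full network described by the abstract, such an aggregation would be wrapped in learnable layers and embedded in a multi-scale encoder-decoder, with the adjacency itself refined through the trainable multi-scale dynamic graphs.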
