Abstract

As a typical label-limited task, synthetic aperture radar (SAR) image scene classification stands to benefit greatly from networks that can exploit labeled and unlabeled samples simultaneously. The graph convolutional network (GCN) is a powerful semisupervised learning paradigm that helps capture the topological relationships among scenes in SAR images. However, existing GCNs perform unsatisfactorily when applied directly to SAR image scene classification with limited labels, because few methods exist to characterize the nodes and edges of SAR images. To tackle these issues, we propose a contrastive learning-based dual dynamic GCN (DDGCN) for SAR image scene classification. Specifically, we design a novel contrastive loss that captures the structures of views and scenes, and develop a clustering-based contrastive self-supervised learning model that maps SAR images from the pixel space to a high-level embedding space, facilitating the subsequent node representation and message passing in GCNs. We then propose a dual-network framework, DDGCN, with multiple features and parameter sharing. One branch is a dynamic GCN that preserves the local consistency and nonlocal dependency of the same scene with the help of a node attention module and a dynamic correlation matrix learning algorithm. The other is a multiscale, multidirectional fully connected network (FCN) that enlarges the discrepancies between different scenes. Finally, the features obtained by the two branches are fused for classification. A series of experiments on synthetic and real SAR images demonstrates that the proposed method achieves consistently better classification performance than existing methods.
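The dual-branch idea described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: a GCN branch propagates node embeddings over a dynamic correlation matrix re-estimated from embedding similarity, a simple FCN branch transforms the same embeddings independently, and the two outputs are fused by concatenation. All function names, dimensions, and the similarity-based construction of the correlation matrix are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_correlation(H, tau=1.0):
    """Row-normalized similarity matrix recomputed from the current
    node embeddings H; plays the role of a dynamically learned
    adjacency (correlation) matrix in this sketch."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
    S = np.exp(Hn @ Hn.T / tau)               # pairwise similarity
    return S / S.sum(axis=1, keepdims=True)   # row-stochastic matrix

def gcn_layer(H, W):
    """One GCN layer: edges re-estimated, then propagate + ReLU."""
    A = dynamic_correlation(H)
    return np.maximum(A @ H @ W, 0.0)

n, d, d_out = 8, 16, 4                        # nodes (e.g. patches), dims
H = rng.standard_normal((n, d))               # embeddings from the SSL encoder
W_gcn = rng.standard_normal((d, d_out)) * 0.1
W_fcn = rng.standard_normal((d, d_out)) * 0.1

gcn_out = gcn_layer(H, W_gcn)                 # dynamic GCN branch
fcn_out = np.maximum(H @ W_fcn, 0.0)          # stand-in for the FCN branch

fused = np.concatenate([gcn_out, fcn_out], axis=1)  # feature fusion
print(fused.shape)                            # → (8, 8)
```

In the actual method, the node attention module and the contrastive loss would shape how the correlation matrix and embeddings are learned; the concatenation step only illustrates the final fusion of the two branches before classification.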
