Abstract

Deep learning methods have recently attracted much attention in the interpretation and understanding of polarimetric synthetic aperture radar (PolSAR) data. However, supervised methods require large-scale labeled data to achieve good performance, and collecting enough labeled data is time-consuming and laborious. To obtain good classification results with limited labeled data, we focus on learning discriminative high-level features shared between multiple representations, which we call mutual information. Because PolSAR data admit multi-modal representations, the multi-modal features of the same pixel should be strongly similar. In addition, each pixel carries its own unique geocoding and scattering information, so every pixel differs greatly from other pixels in a given representation space. Based on these observations, this article proposes a mutual information-based self-supervised learning (MI-SSL) model that learns an implicit representation from unlabeled data. In this article, self-supervised learning is applied to PolSAR data processing for the first time. Furthermore, a reasonable pretext task, suited to PolSAR data, is designed to extract mutual information for classification. Experimental results on four PolSAR data sets demonstrate that, compared with state-of-the-art classification methods, our MI-SSL model achieves impressive overall accuracy with fewer labeled data.
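The abstract's two observations, that multi-modal features of the same pixel should agree while distinct pixels should be separable, correspond to the positive and negative pairs of a contrastive objective. The sketch below illustrates this idea with a generic InfoNCE-style loss over two modality embeddings; it is a minimal illustration of the principle, not the paper's actual MI-SSL pretext task, and the function name, temperature value, and toy data are assumptions for demonstration.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Contrastive (InfoNCE-style) loss between two modality embeddings.

    z_a, z_b: (n, d) arrays of L2-normalized features for the same n pixels
    under two different representations. Row i of z_a and row i of z_b form
    a positive pair; all other rows in the batch act as negatives.
    """
    # Cosine similarity between every cross-modal pair of pixels.
    logits = (z_a @ z_b.T) / temperature            # shape (n, n)
    # Softmax cross-entropy with the diagonal as the correct class:
    # each pixel should match its own embedding in the other modality.
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Toy check: perfectly aligned modalities incur a lower loss than
# modalities whose positive pairs are deliberately mismatched.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
loss_aligned = info_nce_loss(z, z)
loss_mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))
print(loss_aligned < loss_mismatched)
```

Minimizing such a loss pulls the multi-modal features of each pixel together while pushing different pixels apart, which is the intuition the abstract describes.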
