Abstract

Despite the success of deep learning algorithms in hyperspectral image (HSI) classification, most deep learning models require a large amount of labeled data to optimize their numerous parameters. However, collecting many labeled HSI samples is expensive and time-consuming. To cope with this problem, we propose a cross-domain contrastive learning (XDCL) framework that learns representations of HSIs in an unsupervised manner. We demonstrate that the features valuable for category identification are shared across the spectral and spatial domains, while the less useful contents tend to be domain-specific. XDCL extracts such domain-invariant information with a cross-domain discrimination task, i.e., predicting whether two representations from different domains match. With this insight, our method learns semantically meaningful HSI representations. We develop a simple method to construct effective signals representing each of the two domains. Moreover, we randomly mask the signals to raise their semantic level and encourage the representations to capture more useful abstract factors. To evaluate representation quality, we use the learned representations to train a linear classifier on three hyperspectral datasets with limited labeled samples. Experimental results demonstrate that our method surpasses the state-of-the-art methods by a large margin.
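The cross-domain discrimination task described above can be sketched as a contrastive objective in which each spectral representation must identify its matching spatial representation within a batch. The sketch below uses an InfoNCE-style loss as an illustrative assumption; the abstract does not specify the exact loss, masking scheme, or encoder architectures used by XDCL, so all function names and the `temperature` parameter here are hypothetical.

```python
import numpy as np

def cross_domain_info_nce(z_spectral, z_spatial, temperature=0.1):
    """Illustrative cross-domain contrastive loss (InfoNCE-style).

    z_spectral, z_spatial: (batch, dim) embeddings of the same pixels
    produced by separate spectral and spatial encoders. Row i of each
    matrix is assumed to describe the same sample, so matching pairs
    lie on the diagonal of the similarity matrix.
    """
    # L2-normalize both sets of embeddings so dot products are cosines
    z_spectral = z_spectral / np.linalg.norm(z_spectral, axis=1, keepdims=True)
    z_spatial = z_spatial / np.linalg.norm(z_spatial, axis=1, keepdims=True)
    # Similarity logits between every spectral/spatial pair in the batch
    logits = z_spectral @ z_spatial.T / temperature
    # Row-wise log-softmax; the correct "class" for row i is column i
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_spectral))
    return -log_probs[idx, idx].mean()
```

When the two encoders agree on matched samples, the diagonal dominates each row and the loss is small; mismatched pairings drive the loss up, which is the signal that pushes both encoders toward the shared, domain-invariant content.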

Full Text
Published version (Free)