Abstract

Supervised machine learning and deep learning methods perform well in hyperspectral image classification. However, hyperspectral images typically have few labeled samples, which makes supervised models difficult to train, because supervised classification methods rely heavily on sample quantity and quality. Inspired by self-supervised learning, this article proposes a hyperspectral image classification algorithm based on contrastive learning, which exploits the information in abundant unlabeled samples to alleviate the shortage of label information in hyperspectral data. The algorithm uses a two-stage training strategy. In the first stage, the model is pretrained in a self-supervised manner: a large number of unlabeled samples are combined with data augmentation to construct positive and negative sample pairs, and contrastive learning (CL) is carried out so that the model learns to distinguish positive from negative pairs. In the second stage, the pretrained model extracts features of the hyperspectral image for classification, and a small number of labeled samples are used to fine-tune these features. Experiments show that the features learned by self-supervised pretraining improve results on the downstream classification task.
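To make the pretraining objective concrete, the sketch below shows an InfoNCE/NT-Xent-style contrastive loss of the kind commonly used in such pipelines: two augmented views of the same unlabeled patch form a positive pair, and all other samples in the batch serve as negatives. This is a generic illustration of the technique described in the abstract, not the authors' exact formulation; the function name, the temperature value, and the use of NumPy are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent contrastive loss over a batch of embedding pairs.

    z1, z2: (N, D) arrays; row i of z1 and z2 are embeddings of two
    augmented views of the same unlabeled patch (a positive pair).
    All other rows in the combined batch act as negatives.
    (Illustrative sketch, not the paper's exact loss.)
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)            # (2N, D)
    sim = z @ z.T / temperature                     # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    n = z1.shape[0]
    # index of each sample's positive partner in the combined batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy: -log( exp(sim[i, pos[i]]) / sum_j exp(sim[i, j]) )
    logits = sim - sim.max(axis=1, keepdims=True)   # numeric stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each patch together in embedding space and pushes them away from other patches, which is precisely the "judgment on positive and negative samples" the first training stage targets; the encoder weights learned this way are then fine-tuned with the few available labels in stage two.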
