Abstract

Unsupervised and semisupervised feature learning has recently emerged as an effective way to reduce the reliance on expensive data collection and annotation for hyperspectral image (HSI) analysis. Existing unsupervised and semisupervised convolutional neural network (CNN)-based HSI classification works still face two challenges: underutilization of pixel-wise multiscale contextual information for feature learning and high computational cost, for example, a large number of floating-point operations (FLOPs), due to the lack of lightweight design. To utilize the unlabeled pixels in the HSIs more efficiently, we propose a self-supervised contrastive efficient asymmetric dilated network (SC-EADNet) for HSI classification. There are two novelties in the SC-EADNet. First, a self-supervised multiscale pixel-wise contextual feature learning model is proposed, which generates multiple patches around each hyperspectral pixel and develops a contrastive learning framework to learn from these patches for HSI classification. Second, a lightweight feature extraction network, EADNet, composed of multiple plug-and-play efficient asymmetric dilated convolution (EADC) blocks, is designed and inserted into the contrastive learning framework. The EADC block adopts different dilation rates to capture the spatial information of objects with varying shapes and sizes. Compared with other unsupervised, semisupervised, and supervised learning methods, our SC-EADNet provides competitive classification performance on four hyperspectral datasets, including Indian Pines, Pavia University, Salinas, and Houston 2013, while requiring fewer FLOPs and running faster.
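To make the two ideas in the abstract concrete, the sketch below shows (a) one plausible form of an EADC block, where each 3x3 convolution is factorized into asymmetric 3x1 and 1x3 convolutions and parallel branches use different dilation rates, and (b) a standard SimCLR-style NT-Xent loss as a stand-in for the contrastive objective over two patch views of the same pixel. The branch count, dilation rates, residual fusion, and loss formulation here are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EADCBlock(nn.Module):
    """Hypothetical efficient asymmetric dilated convolution (EADC) block.

    Each branch factorizes a 3x3 kernel into 3x1 + 1x3 convolutions
    (the "asymmetric" part) and applies a different dilation rate so the
    block can respond to objects of varying shapes and sizes.
    """

    def __init__(self, channels, dilations=(1, 2, 4)):  # rates are assumed
        super().__init__()
        self.branches = nn.ModuleList()
        for d in dilations:
            self.branches.append(nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=(3, 1),
                          padding=(d, 0), dilation=(d, 1), bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=(1, 3),
                          padding=(0, d), dilation=(1, d), bias=False),
                nn.BatchNorm2d(channels),
            ))
        # 1x1 convolution fuses the concatenated branch outputs
        self.fuse = nn.Conv2d(len(dilations) * channels, channels,
                              kernel_size=1, bias=False)

    def forward(self, x):
        out = torch.cat([b(x) for b in self.branches], dim=1)
        return F.relu(x + self.fuse(out))  # residual connection (assumed)

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two embedded views, e.g., two
    patch scales extracted around the same hyperspectral pixel."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2n, d), unit norm
    n = z1.size(0)
    sim = z @ z.t() / temperature                   # cosine similarities
    sim.fill_diagonal_(float('-inf'))               # exclude self-pairs
    # positive for row i is row i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

In this reading, multiscale patches cropped around each unlabeled pixel pass through a stack of EADC blocks, and the contrastive loss pulls together embeddings of patches centered on the same pixel while pushing apart those of different pixels; the factorized, dilated convolutions are what keep the FLOP count low.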
