Abstract

Although hyperspectral image (HSI) classification has been extensively investigated, the task remains challenging when the number of labeled samples is extremely limited. In this article, we overcome this challenge by using synthetic samples and proposing a semisupervised variational generative adversarial network (GAN). In contrast to the conditional GAN previously used to generate HSI samples, the proposed approach has two novel aspects. First, an encoder-decoder network is extended to the semisupervised setting using an ensemble prediction technique, which allows our deep generative model to be trained with limited labeled samples (only five per class) together with a large number of unlabeled samples. Second, we build a collaborative relationship between the generation network and the classification network, which ensures that our model produces meaningful samples that contribute to the final classification. Experiments on four benchmark HSI datasets demonstrate that the proposed model achieves an increase of more than 10% in overall classification accuracy over a baseline model trained without generated samples. We also show that the proposed model achieves better and more robust HSI classification performance than other generative and semisupervised methods, especially when the labeled data are limited.
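The abstract's ensemble prediction technique for using unlabeled samples is not specified in detail here; one common realization of the idea is temporal ensembling, where an exponential moving average (EMA) of past predictions serves as a consistency target for unlabeled data. The sketch below is illustrative only, with toy numpy "networks" and hypothetical sizes, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_predict(x, W, sigma=0.1):
    # Toy softmax classifier evaluated under input perturbation,
    # standing in for stochastic augmentation / dropout noise.
    z = (x + rng.normal(0, sigma, x.shape)) @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_unlabeled, n_bands, n_classes = 8, 50, 5     # hypothetical sizes
X_u = rng.normal(size=(n_unlabeled, n_bands))  # unlabeled spectral vectors
W = rng.normal(0, 0.1, (n_bands, n_classes))   # toy classifier weights

ensemble = np.zeros((n_unlabeled, n_classes))  # running ensemble of predictions
alpha = 0.6                                    # EMA momentum

for epoch in range(1, 11):
    p = noisy_predict(X_u, W)
    ensemble = alpha * ensemble + (1 - alpha) * p
    target = ensemble / (1 - alpha ** epoch)   # bias-corrected ensemble target
    # The consistency loss penalizes disagreement between the current
    # (noisy) prediction and the more stable ensemble target.
    consistency_loss = np.mean((p - target) ** 2)

print(consistency_loss)
```

In an actual training loop, minimizing this consistency loss on unlabeled samples would be added to the supervised loss on the few labeled samples per class.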

Highlights

  • Hyperspectral imaging sensors can capture images with pixels represented as high-dimensional spectral vectors that range from the visible to the short-wave infrared bands [1]

  • We focus on two main challenges for hyperspectral image (HSI) classification

  • We further explore the potential of using generative adversarial networks (GANs) for HSI classification when labeled samples are extremely limited, and propose a semisupervised variational GAN to solve the problems mentioned above

Introduction

Hyperspectral imaging sensors can capture images with pixels represented as high-dimensional spectral vectors that range from the visible to the short-wave infrared bands [1]. CVAEGAN is a generative framework that combines the advantages of the VAE and the GAN, and applies a feature-matching objective for conditional adversarial learning to synthesize images of a specific identity. This framework [Fig. 1(a)] contains four parts: 1) an encoder network E that learns the relationship between the latent space and the real image space; 2) a generative network G that synthesizes samples; 3) a discriminative network D that distinguishes between real and synthesized samples; and 4) a classification network C that measures class probabilities for real images.
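The four-network layout above can be sketched with toy numpy linear layers to make the data flow concrete. All layer sizes below are hypothetical, and each network is reduced to a single dense layer; this is a shape-level sketch of the E/G/D/C wiring, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # A toy dense layer: small random weights and a zero bias.
    return rng.normal(0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def forward(x, layer):
    W, b = layer
    return x @ W + b

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_bands, n_classes, latent = 200, 5, 16  # hypothetical sizes

# E: encoder mapping a spectral vector to a latent mean and log-variance
E_mu, E_lv = linear(n_bands, latent), linear(n_bands, latent)
# G: generator mapping a latent code plus one-hot class label to a spectrum
G = linear(latent + n_classes, n_bands)
# D: discriminator scoring real vs. synthesized spectra
D = linear(n_bands, 1)
# C: classifier producing class probabilities for real spectra
C = linear(n_bands, n_classes)

x = rng.normal(size=(4, n_bands))                   # mini-batch of 4 "pixels"
y = np.eye(n_classes)[rng.integers(0, n_classes, 4)]  # one-hot class labels

mu, logvar = forward(x, E_mu), forward(x, E_lv)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization
x_fake = forward(np.concatenate([z, y], axis=1), G)        # conditional synthesis
d_real, d_fake = forward(x, D), forward(x_fake, D)         # D scores both
p = softmax(forward(x, C))                                 # class probabilities

print(x_fake.shape, d_real.shape, p.shape)
```

Conditioning G on the class label y is what lets the framework synthesize samples for a specific identity, while D and C provide the adversarial and classification signals, respectively.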
