Abstract

In this paper, we study self-taught learning for hyperspectral image (HSI) classification. Supervised deep learning methods are currently state of the art for many machine learning problems, but these methods require large quantities of labeled data to be effective. Unfortunately, existing labeled HSI benchmarks are too small to directly train a deep supervised network. Instead, we use self-taught learning, an unsupervised method for learning feature-extracting frameworks from unlabeled hyperspectral imagery. These models learn how to extract generalizable features by training on sufficiently large quantities of unlabeled data that are distinct from the target data set. Once trained, these models can extract features from smaller labeled target data sets. We study two self-taught learning frameworks for HSI classification. The first is a shallow approach that uses independent component analysis, and the second is a three-layer stacked convolutional autoencoder. Our models are applied to the Indian Pines, Salinas Valley, and Pavia University data sets, which were captured by two separate sensors at different altitudes. Despite large variation in scene type, our algorithms achieve state-of-the-art results across all three data sets.
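
The sketch below illustrates the general idea of the second framework only, a stacked convolutional autoencoder trained greedily on unlabeled patches and then reused as a feature extractor for the labeled target set. It is not the authors' exact architecture; the Keras/TensorFlow API, the 5x5x200 patch size, the filter counts, and the training hyperparameters are all illustrative assumptions.

```python
# Minimal sketch of a three-layer stacked convolutional autoencoder for
# self-taught feature learning on hyperspectral patches. All sizes and
# hyperparameters are hypothetical, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_conv_autoencoder(input_shape=(5, 5, 200), filters=64):
    """One convolutional autoencoder: encoder conv + reconstructing conv."""
    inputs = layers.Input(shape=input_shape)
    encoded = layers.Conv2D(filters, (3, 3), activation="relu",
                            padding="same")(inputs)
    decoded = layers.Conv2D(input_shape[-1], (3, 3), activation="linear",
                            padding="same")(encoded)
    autoencoder = models.Model(inputs, decoded)
    encoder = models.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder


def train_stacked_autoencoder(unlabeled_patches, n_layers=3, filters=64):
    """Greedy layer-wise training: each layer learns to reconstruct the
    previous layer's output from unlabeled data; the stacked encoders
    then serve as the feature extractor."""
    encoders = []
    current = unlabeled_patches
    for _ in range(n_layers):
        ae, enc = build_conv_autoencoder(current.shape[1:], filters)
        ae.fit(current, current, epochs=10, batch_size=128, verbose=0)
        encoders.append(enc)
        current = enc.predict(current, verbose=0)
    return encoders


def extract_features(encoders, labeled_patches):
    """Pass labeled target patches through the frozen encoder stack."""
    features = labeled_patches
    for enc in encoders:
        features = enc.predict(features, verbose=0)
    return features
```

The extracted features would then feed a conventional classifier trained on the small labeled target set, which is the usual self-taught learning pipeline the abstract describes.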
