Abstract

In recent years, research in remote sensing has demonstrated that deep architectures with multiple layers can potentially extract abstract and invariant features for better hyperspectral image classification. Since a typical real-world hyperspectral image classification task cannot provide enough training samples for a fully supervised deep model, such as a convolutional neural network (CNN), this work instead investigates deep belief networks (DBNs), which allow unsupervised training. A DBN trained on limited training samples usually has many “dead” (never responding) or “potentially over-tolerant” (always responding) latent factors (neurons), which reduce the DBN’s descriptive power and thus degrade hyperspectral image classification performance. This work proposes a new diversified DBN by introducing a diversity-promoting prior over the latent factors during both the DBN pre-training and fine-tuning procedures. The diversity-promoting prior encourages the latent factors to be uncorrelated, so that each latent factor focuses on modelling unique information and, taken together, the factors capture a large proportion of the information, increasing the descriptive power and classification performance of the diversified DBN. The proposed method was evaluated on a well-known real-world hyperspectral image dataset. The experiments demonstrate that diversified DBNs obtain much better results than original DBNs, and comparable or even better performance than other recent hyperspectral image classification methods.
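The decorrelation idea behind the diversity-promoting prior can be illustrated with a simple penalty on the correlation matrix of latent-factor activations: off-diagonal correlations are driven toward zero, so no two factors model the same information. This is a minimal sketch of the general principle, not the paper's exact prior; the function name and formulation are illustrative assumptions.

```python
import numpy as np

def diversity_penalty(H):
    """Mean squared off-diagonal correlation of latent activations.

    H: (n_samples, n_factors) matrix of hidden-unit activations.
    Returns 0 when all factors are perfectly uncorrelated (diverse),
    and approaches 1 when factors are fully redundant.
    NOTE: illustrative regularizer, not the paper's exact prior.
    """
    Hc = H - H.mean(axis=0)                 # center each factor
    Hn = Hc / (Hc.std(axis=0) + 1e-8)       # unit-variance columns
    C = Hn.T @ Hn / H.shape[0]              # correlation matrix
    off = C - np.diag(np.diag(C))           # keep off-diagonal terms only
    k = C.shape[0]
    return float((off ** 2).sum() / (k * (k - 1)))
```

During training, a term like this (weighted by a hyperparameter) would be added to the DBN objective so that gradient updates push redundant or duplicated factors apart.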

Highlights

  • Many popular methods have been developed for hyperspectral image classification in the past several decades

  • The diversity promoting prior will encourage latent factors to be uncorrelated, such that each latent factor focuses on modelling unique information, and all factors will be summed up to capture a large proportion of the information

  • Since the polynomial-kernel support vector machine (SVM-Poly) is a typical ‘shallow’ classifier, the comparison of results demonstrates that deep belief network (DBN) representations obtained through deep learning can benefit hyperspectral image classification


Introduction

Many popular methods have been developed for hyperspectral image classification over the past several decades. Research in both the computer vision and remote sensing literature has demonstrated that deep architectures with more layers can potentially extract abstract and invariant features for better image classification (LeCun et al, 2015). The standard approach to real-world hyperspectral image classification is to select some samples from a given image for classifier training, and then use the learned classifier to classify the remaining test samples in the same image (Zhong and Wang, 2010). This means that we usually do not have enough training samples to train deep models. The problem is most obvious in the fully supervised training of large-scale deep models, such as convolutional neural networks (CNNs).
