Abstract

In the remote sensing literature, deep models with multiple layers have demonstrated their potential for learning abstract and invariant features that improve the representation and classification of hyperspectral images. The usual supervised deep models, such as convolutional neural networks, need a large number of labeled training samples to learn their model parameters. However, real-world hyperspectral image classification tasks provide only a limited number of labeled training samples. This paper adopts another popular deep model, the deep belief network (DBN), to deal with this problem. DBNs allow unsupervised pretraining on unlabeled samples followed by supervised fine-tuning on labeled samples. However, the usual pretraining and fine-tuning procedure tends to make many hidden units in the learned DBNs behave very similarly or act as "dead" (never responding) or "over-tolerant" (always responding) latent factors. Such behavior degrades the descriptive ability, and thus the classification performance, of DBNs. To improve performance, this paper develops a new diversified DBN by regularizing both the pretraining and fine-tuning procedures with a diversity-promoting prior over the latent factors. Moreover, the regularized pretraining and fine-tuning can be efficiently implemented within the usual recursive greedy and back-propagation learning frameworks. Experiments on real-world hyperspectral images demonstrate that the diversity-promoting prior in both the pretraining and fine-tuning procedures yields DBNs with more diverse latent factors; as a result, the diversified DBNs obtain much better results than the original DBNs and perform comparably to, or even better than, other recent hyperspectral image classification methods.
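
As a rough illustration of the idea described above (not the paper's actual formulation), the sketch below folds a diversity-promoting penalty into one contrastive-divergence (CD-1) pretraining step for a single RBM layer of a DBN. The penalty form here, a squared-cosine similarity between hidden-unit weight vectors that discourages units from learning near-identical features, as well as all dimensions, names, and hyperparameters, are illustrative assumptions.

```python
# Minimal NumPy sketch: one CD-1 pretraining step for an RBM layer of a DBN,
# augmented with an assumed diversity penalty on the hidden-unit weight vectors.
# The squared-cosine penalty is a stand-in for the paper's diversity-promoting
# prior, whose exact form is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def diversity_grad(W, eps=1e-8):
    """Gradient of sum over i<j of cos^2(w_i, w_j) w.r.t. the rows of W."""
    norms = np.linalg.norm(W, axis=1, keepdims=True) + eps
    U = W / norms                      # unit-normalized weight vectors
    S = U @ U.T                        # pairwise cosine similarities
    np.fill_diagonal(S, 0.0)           # ignore self-similarity
    G_unit = 2.0 * S @ U               # gradient w.r.t. the unit vectors
    # Project back through the normalization u = w / ||w||.
    G = (G_unit - U * np.sum(G_unit * U, axis=1, keepdims=True)) / norms
    return G

def cd1_step(W, b_v, b_h, v0, lr=0.05, lam=0.01):
    """One CD-1 update on a visible batch v0, with diversity regularization."""
    h0_prob = sigmoid(v0 @ W.T + b_h)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W + b_v)            # reconstruction
    h1_prob = sigmoid(v1_prob @ W.T + b_h)
    n = v0.shape[0]
    dW = (h0_prob.T @ v0 - h1_prob.T @ v1_prob) / n
    W += lr * (dW - lam * diversity_grad(W))   # likelihood grad minus penalty grad
    b_v += lr * np.mean(v0 - v1_prob, axis=0)
    b_h += lr * np.mean(h0_prob - h1_prob, axis=0)
    return W, b_v, b_h

# Toy usage: 20 spectral bands -> 10 hidden units, random batches of 32 "pixels".
W = 0.01 * rng.standard_normal((10, 20))
b_v, b_h = np.zeros(20), np.zeros(10)
for _ in range(100):
    batch = rng.random((32, 20))
    W, b_v, b_h = cd1_step(W, b_v, b_h, batch)
```

In this sketch, the penalty gradient is simply subtracted from the CD-1 likelihood gradient, so the regularized pretraining fits inside the usual greedy layer-wise loop; an analogous penalty term could be added to the cross-entropy loss during back-propagation fine-tuning.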
