Abstract

In hyperspectral remote sensing (HSRS), feature data can become very high-dimensional. At the same time, manually labeling that data is expensive. As a consequence of these two factors, one of the core challenges is to perform multi-class classification from only relatively few training samples. In this work, we investigate classification performance under limited training data. First, we revisit the optimization of the internal parameters of a classifier in the context of limited training data. Second, we report an interesting alternative to parameter optimization: classification performance can also be considerably increased by adding synthetic samples drawn from a Gaussian mixture model (GMM) to the feature space while using a classifier with unoptimized parameters. Third, we show that variational expectation maximization achieves much faster convergence than conventional expectation maximization when fitting the GMM to the data. In our experiments, the addition of synthetic samples leads to comparable, and in some cases even higher, classification performance than a properly tuned classifier on the limited training data. One advantage of the proposed framework is that these performance improvements are achieved with a quite simple model. Another advantage is that the approach is computationally much more efficient than classifier parameter optimization and conventional expectation maximization.
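As a rough illustration of the augmentation idea described above (a sketch, not the authors' exact pipeline), the snippet below fits a per-class GMM with scikit-learn's variational implementation (`BayesianGaussianMixture`) and draws synthetic samples to enlarge a small labeled training set, before training a classifier left at its default, unoptimized parameters. The component count, sample budget, diagonal covariance, and the SVM baseline are all assumptions made for this example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.svm import SVC


def augment_with_gmm(X_train, y_train, n_synthetic_per_class=100,
                     n_components=3, seed=0):
    """Fit a variational GMM per class and draw synthetic feature vectors.

    Illustrative sketch: the component count and per-class sample budget
    are assumptions, not values taken from the paper.
    """
    X_aug, y_aug = [X_train], [y_train]
    for label in np.unique(y_train):
        X_c = X_train[y_train == label]
        # Variational inference prunes superfluous components on its own,
        # which typically converges faster than classical EM when only a
        # few labeled samples per class are available.
        gmm = BayesianGaussianMixture(
            n_components=min(n_components, len(X_c)),
            covariance_type="diag",  # keeps the model simple in high dimensions
            random_state=seed,
        ).fit(X_c)
        X_syn, _ = gmm.sample(n_synthetic_per_class)
        X_aug.append(X_syn)
        y_aug.append(np.full(len(X_syn), label))
    return np.concatenate(X_aug), np.concatenate(y_aug)


# Usage: train a classifier with unoptimized (default) parameters on the
# augmented set instead of tuning its hyperparameters.
# X_small, y_small = ...  # a few labeled hyperspectral pixels per class
# X_big, y_big = augment_with_gmm(X_small, y_small)
# clf = SVC().fit(X_big, y_big)  # no parameter optimization
```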
