Abstract

Dimensionality reduction is a key step in hyperspectral image processing. Recent investigations have explored nonlinear manifold learning for dimensionality reduction in hyperspectral imagery. Nonlinear manifold learning methods such as Isomap and Locally Linear Embedding are used to recover the low-dimensional representation of an unknown nonlinear manifold underlying high-dimensional data, where it is important to retain the neighborhood structure of the manifold. Although these algorithms use different philosophies for recovering the nonlinear manifold, they all incorporate neighborhood information from each data point to construct a weighted graph with the data points as vertices. The performance of these methods therefore depends strongly on how well these neighborhoods are selected, since all subsequent steps rely on them. The k-NN algorithm is the most widely used technique for neighbor selection in manifold learning. However, it can result in a disconnected graph, and it does not fully exploit spatial neighborhood information from the image when selecting points to form the neighborhoods. In this paper, recently proposed methods for constructing the weighted graph in manifold learning are studied: k-VC, k-EC, and k-MST, which have the advantage of creating connected graphs and have performed well on artificial data sets. Spatial information from the hyperspectral images is included in the manifold learning process by using spatial coherence. Experiments are conducted with artificial data and hyperspectral images. For hyperspectral images, classification accuracy is used as an indirect measure of how well the low-dimensional embedding reduces dimensionality while still maintaining good discrimination performance.
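To make the connectivity issue concrete, the sketch below builds a k-NN distance graph and, when that graph is disconnected, augments it with the edges of a Euclidean minimum spanning tree so that every vertex is reachable. This is only an illustrative k-MST-style variant under assumed choices (Euclidean distances, scikit-learn/SciPy routines, a hypothetical helper name `knn_plus_mst_graph`), not the exact construction studied in the paper.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform


def knn_plus_mst_graph(X, k=10):
    """Build a symmetric k-NN distance graph; if it is disconnected,
    add the edges of a Euclidean MST so the result is connected.
    (Illustrative sketch, not the paper's exact k-MST construction.)"""
    # k-NN graph with edge weights equal to Euclidean distances
    G = kneighbors_graph(X, n_neighbors=k, mode="distance")
    G = G.maximum(G.T)  # symmetrize: keep an edge if either point names the other

    n_components, _ = connected_components(G, directed=False)
    if n_components > 1:
        # An MST over the full pairwise-distance matrix touches every point,
        # so taking the union of its edges with the k-NN edges connects the graph.
        D = csr_matrix(squareform(pdist(X)))
        mst = minimum_spanning_tree(D)
        mst = mst.maximum(mst.T)
        G = G.maximum(mst)
    return G


# Usage sketch: the connected graph can then feed a geodesic-distance-based
# embedding such as Isomap (here via shortest paths on the weighted graph).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))  # placeholder for hyperspectral pixel spectra
    G = knn_plus_mst_graph(X, k=8)
    print("connected components:", connected_components(G, directed=False)[0])
```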
