Abstract

Hyperspectral images encompass abundant information and provide unique characteristics for material classification. However, labeling training samples for hyperspectral image classification can be challenging. To address this problem, this study proposes a framework named flexible Gabor-based superpixel-level unsupervised linear discriminant analysis (FG-SuULDA) to extract the most informative and discriminative features for classification. First, a set of 3-D flexible Gabor filters is rigorously designed using an asymmetric sinusoidal wave to fully characterize the spatial–spectral structure of hyperspectral images. Then, an unsupervised linear discriminant analysis strategy guided by the entropy rate superpixel (ERS) segmentation algorithm, called SuULDA, is introduced to reduce the large number of extracted FG features. With the aid of superpixel information, SuULDA not only boosts classification capability but also increases the distinctiveness of the features. Finally, the resulting features are fed into the popular support vector machine classifier. The proposed FG-SuULDA framework is applied to four real hyperspectral image data sets, and the experiments consistently show that FG-SuULDA is superior to several state-of-the-art methods in both classification performance and computational efficiency, especially with scarce training samples. The code for this work is available at http://jiasen.tech/papers/ for the sake of reproducibility.
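
For orientation, the following is a minimal sketch of how a pipeline of this kind could be wired together in Python. It is not the authors' implementation: a plain symmetric 3-D Gabor bank stands in for the flexible (asymmetric-sinusoid) Gabor filters, SLIC superpixels stand in for ERS segmentation, and scikit-learn's supervised LDA replaces the unsupervised SuULDA projection; all function names, shapes, and parameter values are illustrative assumptions.

import numpy as np
from scipy.ndimage import convolve
from skimage.segmentation import slic
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_kernel_3d(size=7, freq=0.3, theta=0.0, sigma=2.0):
    # Plain symmetric 3-D Gabor kernel (a stand-in for the paper's flexible,
    # asymmetric-sinusoid design).
    ax = np.arange(size) - size // 2
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * rotated)

def extract_fg_features(cube, thetas=(0.0, np.pi / 4, np.pi / 2)):
    # Convolve the H x W x B cube with a small 3-D Gabor bank; responses are
    # concatenated along the spectral axis, giving H x W x (B * len(thetas)).
    responses = [convolve(cube, gabor_kernel_3d(theta=t), mode="nearest") for t in thetas]
    return np.concatenate(responses, axis=-1)

def fg_su_ulda_sketch(cube, train_mask, train_labels):
    # 1) Spatial-spectral Gabor features.
    feats = extract_fg_features(cube)
    h, w, d = feats.shape
    flat = feats.reshape(-1, d)
    # 2) Superpixel guidance: average features within each superpixel
    #    (SLIC here; the paper uses ERS segmentation).
    segments = slic(cube, n_segments=500, compactness=0.1, channel_axis=-1).ravel()
    for s in np.unique(segments):
        idx = segments == s
        flat[idx] = flat[idx].mean(axis=0)
    # 3) Dimensionality reduction: supervised LDA as a stand-in for SuULDA.
    lda = LinearDiscriminantAnalysis()
    x_train = lda.fit_transform(flat[train_mask.ravel()], train_labels)
    # 4) SVM classification on the reduced features.
    svm = SVC(kernel="rbf").fit(x_train, train_labels)
    return svm.predict(lda.transform(flat)).reshape(h, w)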
