Abstract

Averaged learning subspace methods (ALSMs) are easy to implement and perform well in the classification of hyperspectral images. However, several open and challenging problems remain which, if addressed, could further improve their classification accuracy. We carried out experiments using two improved subspace methods (a dynamic and a fixed subspace-dimension method) in conjunction with [0,1] and [-1,+1] normalization. Three performance indicators support the experimental study: classification accuracy, computation time, and the stability of the parameter settings. Results are presented for the AVIRIS Indian Pines data set. The experimental analysis showed that the fixed subspace-dimension method combined with [0,1] normalization yielded higher classification accuracy than the other subspace methods. Moreover, ALSMs are easy to apply: only two parameters need to be set, and they can be applied directly to hyperspectral data. In addition, they can correctly classify all training samples in a finite number of iterations.
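The two normalizations named in the abstract can be sketched as simple band-wise min-max scalings. The snippet below is illustrative only and assumes per-band scaling of a (rows, cols, bands) cube; the paper may instead normalize per pixel or over the whole data set, and the function names are not from the paper.

```python
import numpy as np

def normalize_01(cube):
    """Scale each spectral band of a (rows, cols, bands) cube to [0, 1]."""
    lo = cube.min(axis=(0, 1), keepdims=True)
    hi = cube.max(axis=(0, 1), keepdims=True)
    return (cube - lo) / (hi - lo + 1e-12)  # epsilon guards constant bands

def normalize_pm1(cube):
    """Scale each spectral band to [-1, +1] by shifting the [0, 1] result."""
    return 2.0 * normalize_01(cube) - 1.0
```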

Highlights

  • Hyperspectral data provide detailed spectral information about ground scenes based on a huge number of channels with narrow contiguous spectral bands

  • We present the dynamic subspace-dimension method, which sets the subspace dimension of each class independently within the averaged learning subspace method (ALSM), and the fixed subspace-dimension method, which uses the same subspace dimension for every class; both are evaluated under the two normalization methods (see the sketch after this list)

  • Three types of experiments were carried out to determine how the classification accuracy is affected by the subspace dimension, normalization, and learning parameters
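The highlights distinguish fixed from class-specific (dynamic) subspace dimensions. The following is a minimal sketch of the basic subspace classification step that ALSMs build on, assuming a CLAFIC-style maximum-projection criterion: each class subspace is spanned by the leading eigenvectors of that class's autocorrelation matrix, and a pixel is assigned to the class whose subspace captures the largest squared projection. The function names and the per-class dimension mapping `dims` are illustrative, not taken from the paper, and the iterative averaged-learning update of the subspaces is omitted.

```python
import numpy as np

def fit_class_subspaces(train, dims):
    """train: dict class -> (n_samples, n_bands) array of normalized spectra.
    dims: dict class -> subspace dimension (one shared value for the fixed
    method, or chosen per class for the dynamic variant).
    Returns dict class -> (n_bands, dims[class]) orthonormal basis."""
    bases = {}
    for c, X in train.items():
        # Leading eigenvectors of the class autocorrelation matrix.
        R = X.T @ X / X.shape[0]
        eigvals, eigvecs = np.linalg.eigh(R)        # ascending order
        bases[c] = eigvecs[:, ::-1][:, :dims[c]]    # keep the top dims[c]
    return bases

def classify(x, bases):
    """Assign pixel spectrum x to the class with the largest projection."""
    scores = {c: np.sum((U.T @ x) ** 2) for c, U in bases.items()}
    return max(scores, key=scores.get)
```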



Introduction

Hyperspectral data provide detailed spectral information about ground scenes based on a huge number of channels with narrow, contiguous spectral bands. Hyperspectral data can yield higher classification accuracy and more detailed class taxonomies. However, the increase in data dimensionality introduces challenging methodological problems, because common image processing algorithms cannot cope with such high-volume data sets [2,3]. In the context of supervised classification, the most common problem is the Hughes phenomenon [4], which implies that the number of training samples required for supervised classification increases as a function of dimensionality. Commonly used methods for dealing with high-dimensional data include feature selection and feature extraction [5,6,7,8,9], principal component analysis (PCA) combined with conventional classifiers [10], the minimum noise fraction transform [11], orthogonal subspace projection classification [12], support vector machine (SVM) classifiers [13,14,15,16,17,18], and the spectral angle mapper and spectral information divergence methods [19,20].
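As one concrete illustration of the dimensionality reduction approaches listed above, PCA projects the spectral dimension of a hyperspectral cube onto a few leading components before a conventional classifier is applied. This sketch is illustrative only and is not the method proposed in the paper; the function name and component count are assumptions.

```python
import numpy as np

def pca_reduce(cube, n_components=10):
    """Project a (rows, cols, bands) hyperspectral cube onto its leading
    principal components, returning a (rows, cols, n_components) cube."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # center each band
    cov = np.cov(X, rowvar=False)             # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # leading principal axes
    return (X @ top).reshape(rows, cols, n_components)
```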

