Abstract

In this paper, a general nearest feature line (NFL) embedding (NFLE) transformation called fuzzy-kernel NFLE (FKNFLE) is proposed for hyperspectral image (HSI) classification, in which kernelization and fuzzification are considered simultaneously. Although NFLE has demonstrated strong discriminative capability, its linear scatters cannot adequately capture non-linear manifold structure. In the proposed scheme, samples are projected into a kernel space and assigned fuzzy weights according to the class memberships of their neighbors. The within-class and between-class scatters are computed from these fuzzy weights, and the optimal transformation is obtained by maximizing the Fisher criterion in the kernel space. In this way, the kernelized manifold learning preserves the local manifold structure in a Hilbert space as well as in the reduced low-dimensional space. The proposed method was compared with several state-of-the-art methods on three benchmark data sets, and the experimental results show that FKNFLE outperformed the more conventional methods.
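
To make the point-to-feature-line geometry behind NFL-based scatters concrete, the following is a minimal sketch (not the authors' code) of the nearest-feature-line distance in input space. The function name `nfl_distance` and the toy vectors are illustrative assumptions; the kernelization and fuzzy weighting described in the abstract are not included here.

```python
import numpy as np

def nfl_distance(x, xi, xj):
    """Point-to-feature-line distance used by NFL-based methods.

    The feature line passes through two same-class prototypes xi and xj;
    x is projected onto that line and the residual norm is returned.
    """
    direction = xj - xi
    denom = np.dot(direction, direction)
    if denom == 0.0:                        # degenerate line (xi == xj)
        return np.linalg.norm(x - xi)
    t = np.dot(x - xi, direction) / denom   # position parameter along the line
    projection = xi + t * direction         # foot of the perpendicular
    return np.linalg.norm(x - projection)

# toy usage: distance from (0.5, 2.0) to the line through (0, 0) and (1, 0)
print(nfl_distance(np.array([0.5, 2.0]),
                   np.array([0.0, 0.0]),
                   np.array([1.0, 0.0])))   # -> 2.0
```

In NFLE-style methods, distances of this form (between a sample and feature lines spanned by same-class or different-class prototypes) are what enter the within-class and between-class scatters.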

Highlights

  • Dimensionality reduction (DR) in hyperspectral image (HSI) classification is a critical issue during data analysis because most multispectral, hyperspectral, and ultraspectral sensors generate high-dimensional spectral images with abundant spectral bands and data

  • Kernelization approaches have been proposed to improve the performance of HSI classification

  • Nine hundred training samples from 10 classes in the IPS-10 subset were randomly chosen from 9,620 pixels, and the remaining samples were used for testing (a per-class sampling sketch follows this list)
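
The per-class random split mentioned in the last highlight can be sketched as below. This is not the authors' code; the function name `split_per_class` and the assumption of 90 training pixels per class (900 total over 10 classes) are illustrative.

```python
import numpy as np

def split_per_class(labels, n_train_per_class=90, seed=0):
    """Randomly pick a fixed number of training pixels per class; the rest are test.

    With 10 classes and 90 samples each this matches the 900-training-sample
    protocol described above (equal counts per class are an assumption here).
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all pixel indices of class c
        rng.shuffle(idx)
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    return np.array(train_idx), np.array(test_idx)
```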


Summary

Introduction

Dimensionality reduction (DR) in hyperspectral image (HSI) classification is a critical issue during data analysis because most multispectral, hyperspectral, and ultraspectral sensors generate high-dimensional spectral images with abundant spectral bands and data. A number of DR methods have been proposed, and they can be grouped into three categories: linear analysis, manifold learning, and kernelization. Manifold learning methods have been proposed to reveal the local structure of samples. Since locality preserving projection (LPP) models sample scatter through the relationships between neighbors, the local manifold structure is preserved and the performance is better than that of linear analysis methods. The Laplacian eigenmap (LE) algorithm reduces the dimensions of features from a high-dimensional polarimetric manifold space to an intrinsic low-dimensional manifold space. Wang and He [7] investigated LPP for DR in HSI classification. Zhang et al. [12] proposed a manifold-regularized sparse low-rank approximation, which treats the hyperspectral image as a data cube, for HSI classification. These manifold learning methods all preserve the local structure of samples and improve on the performance of conventional linear analysis methods.
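
As a concrete illustration of the manifold-learning family discussed above, here is a minimal LPP sketch (not taken from the paper). The function name `lpp`, the heat-kernel parameter `t`, the regularization term, and the random toy data are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=5, t=1.0):
    """Minimal locality preserving projection (LPP) sketch.

    X: (n_samples, n_features) spectral vectors. Builds a k-NN heat-kernel
    affinity graph, then solves the generalized eigenproblem
    X^T L X a = lambda X^T D X a for the smallest eigenvalues.
    """
    n = X.shape[0]
    d2 = cdist(X, X, "sqeuclidean")
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]          # k nearest neighbors (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)      # heat-kernel weights
    W = np.maximum(W, W.T)                         # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])    # regularize for numerical stability
    vals, vecs = eigh(A, B)                        # ascending eigenvalues
    return vecs[:, :n_components]                  # (n_features, n_components) projection

# usage: project 200 random 30-band "pixels" onto 2 LPP components
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
Y = X @ lpp(X)
```

The point of the example is only to show how neighborhood weights enter the objective; kernelized variants such as those compared in this paper replace X with kernel evaluations.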

Related Works
Kernelization of LDA
Kernelization of NFLE
Fuzzification of NFLE
Description of Data Sets
A Toy Example
Classification Results
Conclusions
