Abstract

Many information processing problems can be transformed into some form of eigenvalue or singular value problem. Eigenvalue decomposition (EVD) and singular value decomposition (SVD) are usually used for solving these problems. In this paper, we give an introduction to various neural network implementations and algorithms for principal component analysis (PCA) and its various extensions. PCA is a statistical method that is directly related to EVD and SVD. Minor component analysis (MCA) is a variant of PCA that is useful for solving total least squares (TLS) problems. The algorithms are typical unsupervised learning methods. Some other neural network models for feature extraction, such as localized methods, complex-domain methods, generalized EVD, and SVD, are also described. Topics associated with PCA, such as independent component analysis (ICA) and linear discriminant analysis (LDA), are mentioned in passing in the conclusion. These methods are useful in adaptive signal processing, blind signal separation (BSS), pattern recognition, and information compression.
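
As a concrete illustration of the Hebbian-type adaptive algorithms surveyed in the paper, the following minimal sketch (hypothetical data and learning rate; it uses Oja's single-unit rule rather than any particular algorithm from the paper) extracts the first principal component online and compares it with the leading eigenvector of the sample covariance matrix.

```python
import numpy as np

# Hypothetical illustration: Oja's single-unit rule for the first principal
# component of zero-mean data (not a specific algorithm from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5)) @ np.diag([3.0, 1.5, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)                     # PCA assumes zero-mean data

w = rng.normal(size=5)
w /= np.linalg.norm(w)
eta = 1e-3                              # assumed learning rate

for _ in range(20):                     # a few online passes over the samples
    for x in X:
        y = w @ x                       # neuron output y = w^T x
        w += eta * y * (x - y * w)      # Hebbian term with self-normalizing decay

# The weight vector converges to the principal eigenvector of the covariance.
C = X.T @ X / len(X)
v1 = np.linalg.eigh(C)[1][:, -1]        # leading eigenvector (eigh sorts ascending)
print(abs(w @ v1))                      # close to 1.0 when w is aligned with v1
```

The point of the adaptive formulation is that no covariance matrix needs to be formed or stored; each incoming sample updates the weight vector directly, which is the source of the low computational and storage complexity noted below.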

Highlights

  • In information processing tasks such as pattern recognition, data compression and coding, image processing, high-resolution spectrum analysis, and adaptive beamforming, feature extraction or feature selection is necessary to deal with the large volume of raw data.

  • Principal component analysis (PCA), as well as linear discriminant analysis (LDA), achieves the same results for an original data set and its orthonormally transformed version [143]; both methods can be implemented directly in the DCT domain, and the results are exactly the same as those obtained from the spatial domain (see the sketch after this list).

  • We have discussed various neural network implementations and algorithms for PCA and its various extensions, including minor component analysis (MCA), generalized eigenvalue decomposition (EVD), constrained PCA, two-dimensional methods, localized methods, complex-domain methods, and singular value decomposition (SVD). These neural network methods have an advantage over their conventional counterparts in that they are adaptive algorithms with low computational and storage complexity.
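
The orthonormal-invariance claim in the second highlight can be checked numerically. The sketch below uses hypothetical data and a random orthonormal matrix Q standing in for the transform (the orthonormal DCT matrix is one such Q): the covariance spectra of x and Qx coincide, and the PCA features agree up to sign.

```python
import numpy as np

# Hypothetical check of PCA invariance under an orthonormal transform.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))   # correlated data
X -= X.mean(axis=0)

Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # any orthonormal matrix (e.g., the DCT matrix)
Y = X @ Q.T                                    # transformed samples y = Q x

Cx = X.T @ X / len(X)                          # covariance of original data
Cy = Y.T @ Y / len(Y)                          # covariance of transformed data: Q Cx Q^T
lx, Ux = np.linalg.eigh(Cx)
ly, Uy = np.linalg.eigh(Cy)

print(np.allclose(lx, ly))                             # identical eigenvalue spectra
print(np.allclose(np.abs(X @ Ux), np.abs(Y @ Uy)))     # identical PCA features (up to sign)
```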


Summary

Introduction

In information processing tasks such as pattern recognition, data compression and coding, image processing, high-resolution spectrum analysis, and adaptive beamforming, feature extraction or feature selection is necessary to deal with the large volume of raw data. The Gram-Schmidt orthonormalization (GSO) is suitable for feature selection rather than feature extraction. This is because the physically meaningless features in Gram-Schmidt space can be linked back to the same number of variables of the measurement space, resulting in no dimensionality reduction. Principal component analysis (PCA) is a well-known orthogonal transform that is used for dimensionality reduction. Another popular technique for feature extraction is linear discriminant analysis (LDA), also known as Fisher’s discriminant analysis [2, 3]. In contrast to the GSO transform, PCA generates each of its features from the covariance matrix of all the N vectors x_i, i = 1, …, N. A brief summary is given in Section 16, and independent component analysis (ICA) and linear discriminant analysis (LDA) are mentioned in passing.
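
A minimal batch sketch of the covariance-based construction just described (illustrative variable names, not code from the paper): the PCA features of each sample are its projections onto the leading eigenvectors of the sample covariance matrix, which is how the dimensionality reduction is obtained.

```python
import numpy as np

def pca_features(X, m):
    """Reduce the rows of X (N samples x d variables) to m PCA features."""
    Xc = X - X.mean(axis=0)                 # center the data
    C = Xc.T @ Xc / len(Xc)                 # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :m]             # top-m principal directions
    return Xc @ W, eigvals[::-1][:m]        # features and their variances

# Hypothetical data for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))
Z, var = pca_features(X, m=3)
print(Z.shape, var)                         # (300, 3) and the top-3 variances
```

The neural network algorithms surveyed in the following sections estimate the same subspace adaptively, sample by sample, instead of forming and diagonalizing the covariance matrix in batch.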

2. Hebbian Learning Rule and Oja’s Learning Rule
3. Principal Component Analysis
4. Hebbian Rule-Based Principal Component Analysis
5. Least Mean Squared Error-Based Principal Component Analysis
6. Other Optimization-Based Principal Component Analysis
7. Anti-Hebbian Rule-Based Principal Component Analysis
8. Nonlinear Principal Component Analysis
9. Minor Component Analysis
10. Localized Principal Component Analysis
11. Extending to Complex Domain
12. Other Generalizations of PCA
13. Singular Value Decomposition
14. Canonical Correlation Analysis
15. A Simulation Example
16. Summary