Abstract
This chapter explains the theory of principal component analysis (PCA) in detail and discusses practical implementation issues along with several application examples. It introduces the mathematical concepts behind PCA, such as the mean, covariance, eigenvalues, and eigenvectors. The principal components are ordered so that they retain information in descending order. In many cases, PCA is implemented using the singular value decomposition (SVD) of the covariance matrix. The standard PCA algorithm can be extended to nonlinear principal components by means of nonlinear kernels. SVD is an important tool for factorizing an arbitrary real or complex matrix, with applications in many research areas, such as signal processing and statistics. PCA can also be performed without eigenanalysis, using neural networks trained to extract the principal components of the training set.
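The procedure the abstract summarizes — mean-centering, covariance, and eigenanalysis ordered by retained variance — can be sketched as follows. This is a minimal illustration, not the chapter's own implementation; the function name `pca` and the choice of NumPy are assumptions. It uses the SVD of the centered data matrix, which is equivalent to eigenanalysis of the covariance matrix since the singular values squared (divided by n − 1) are the covariance eigenvalues.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the first k principal components.

    Illustrative sketch: SVD of the mean-centered data matrix,
    equivalent to eigenanalysis of the sample covariance matrix.
    """
    # Subtract the column-wise mean so each feature is centered at zero
    Xc = X - X.mean(axis=0)
    # Right singular vectors Vt are the principal directions;
    # singular values S come back in descending order, so the
    # components automatically retain variance in descending order
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T          # data expressed in the new basis
    variances = (S ** 2) / (len(X) - 1)  # covariance eigenvalues
    return scores, Vt[:k], variances
```

Because `np.linalg.svd` already returns singular values sorted largest-first, no explicit eigenvalue sorting is needed, and the explained variances come out in the descending order the abstract describes.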