Abstract
Principal component analysis (PCA) plays an important role in various areas. In many applications it is necessary to adaptively compute the principal components of the input data. Over the past several years, there have been numerous neural network approaches to adaptively extracting principal components for PCA. One of the most popular learning rules for training a single-layer linear network for principal component extraction is Sanger's generalized Hebbian algorithm (GHA). We have extended the GHA (EGHA) by including a positive-definite symmetric weighting matrix in the representation error-cost function that is used to derive the learning rule for training the network. The EGHA makes it possible to place different weighting factors on the principal component representation errors. In particular, if prior knowledge of the variances of the individual terms of the input vector is available, this statistical information can be incorporated into the weighting matrix. We have shown that with a weighted representation error-cost function, where the weighting matrix is diagonal with the reciprocals of the standard deviations of the input on the diagonal, the EGHA produces more accurate results than the GHA.
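The abstract does not reproduce the learning rules themselves. As background, the following is a minimal NumPy sketch of Sanger's standard GHA update for a single-layer linear network, together with the diagonal weighting matrix (reciprocals of the input standard deviations) that the abstract describes for the EGHA cost function. The function names, learning rate, and data layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gha_step(W, x, lr=1e-3):
    """One update of Sanger's generalized Hebbian algorithm (GHA).

    W : (m, n) weight matrix; its rows converge to the top-m principal
        components of the (zero-mean) input distribution.
    x : (n,) input sample.
    """
    y = W @ x  # linear network outputs
    # Hebbian term minus lower-triangular deflation term (Sanger's rule)
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def make_weighting_matrix(X):
    """Diagonal weighting matrix described in the abstract for the EGHA:
    reciprocals of the input standard deviations on the diagonal.

    X : (num_samples, n) data matrix used to estimate per-component spread.
    """
    sigma = X.std(axis=0)
    return np.diag(1.0 / sigma)
```

The EGHA's learning rule is derived in the paper from the weighted representation error-cost function, presumably of the form E[(x − Wᵀy)ᵀ Q (x − Wᵀy)] with Q the positive-definite symmetric weighting matrix; the exact update is not given in the abstract and is not reproduced here.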