Abstract
We propose a fast neural algorithm to perform Principal Component Analysis (PCA) of a set of examples. It is obtained by simplifying current neural learning rules for PCA. First, we use a single binary neuron to extract a given component by a Hebb-type learning rule (Self-Organized Perceptron). This rule rapidly yields the first principal component; moreover, as the neuron is binary, its convergence is easily interpreted in terms of geometry and trajectory. Successive components are then obtained by projecting the examples onto the subspace complementary to the already-learnt components. This avoids mixing the components, as happens in, for example, the "Subspace Method" recently proposed by Oja [11]. We have tested this approach on a Gaussian distribution of examples: the quality of the results is identical to that obtained with methods that diagonalize the correlation matrix computed from the set of examples. A variant of the algorithm is also proposed to perform the Singular Value Decomposition (SVD), which "diagonalizes" an asymmetrical matrix; its performance is as satisfactory as for PCA. The complexity and performance of a VLSI implementation are then estimated from the specifications of the neural VLSI circuit developed in our laboratory. Comparison with hardware implementations of non-neural SVD approaches clearly favors the neural approach: the expected speed increase is at least two orders of magnitude for a 100-dimensional SVD calculation.
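The deflation scheme described above can be illustrated with a minimal sketch: a single sign-output neuron is trained with a normalized Hebb-type update until it aligns with a principal direction, after which the learnt component is projected out of the examples before the next neuron is trained. This is an illustrative reconstruction, not the paper's exact rule; the learning rate, epoch count, and normalization step are assumptions.

```python
import numpy as np

def hebbian_pca(X, n_components, lr=0.01, epochs=50, rng=None):
    """Sketch of deflation-based Hebbian PCA with a binary neuron.

    Hyperparameters (lr, epochs) are illustrative, not taken from
    the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = X - X.mean(axis=0)                # centre the examples
    components = []
    for _ in range(n_components):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in rng.permutation(X):
                y = np.sign(w @ x)        # binary neuron output
                w += lr * y * x           # Hebb-type update
                w /= np.linalg.norm(w)    # keep the weight vector unit-length
        components.append(w)
        # Deflation: project examples onto the subspace
        # complementary to the component just learnt.
        X = X - np.outer(X @ w, w)
    return np.array(components)

# Usage: on correlated Gaussian data, the learnt directions should match
# (up to sign) the leading eigenvectors of the correlation matrix.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0], [[4, 1, 0], [1, 2, 0], [0, 0, 1]], 2000)
W = hebbian_pca(X, n_components=2, rng=rng)
```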