Abstract

The behavior of a constrained linear computing unit is analysed during “Hebbian” learning by gradient descent of a cost function corresponding to the sum of a variance maximization term and a weight normalization term. The n-dimensional landscape of this cost function is shown to be composed of one local maximum, n saddle points, and one global minimum aligned with the principal components of the input patterns. Furthermore, the landscape can be described in terms of hyperspheres, hypercrests, and hypervalleys associated with each of these principal components. Using this description, it is shown that the learning trajectory converges to the global minimum of the landscape, corresponding to the main principal component of the input patterns, provided certain conditions on the starting weights and on the learning rate of the descent procedure are satisfied. Extensions and implications of the algorithm are discussed.
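For concreteness, the following is a minimal Python sketch of such a constrained Hebbian rule, assuming a cost of the form J(w) = -wᵀCw + μ(‖w‖² − 1)², where C is the input covariance matrix; the specific cost function, data, and hyperparameter values here are illustrative assumptions rather than those of the paper. With this choice, w = 0 is a local maximum and the remaining critical points lie along the principal components, so a small random start and a sufficiently small learning rate let the descent settle into the minimum aligned with the main principal component.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical input patterns with one dominant principal direction.
    n, n_patterns = 5, 2000
    X = rng.normal(size=(n_patterns, n)) * np.array([3.0, 1.5, 1.0, 0.5, 0.2])
    C = X.T @ X / n_patterns              # input covariance matrix

    # Assumed cost: J(w) = -w^T C w + mu * (||w||^2 - 1)^2
    # (a variance-maximization term plus a weight-normalization penalty).
    mu, eta = 1.0, 0.01                   # penalty weight and learning rate (assumed)
    w = 0.1 * rng.normal(size=n)          # small random starting weights

    for _ in range(5000):
        grad = -2.0 * C @ w + 4.0 * mu * (w @ w - 1.0) * w
        w -= eta * grad                   # gradient descent step

    # The learned direction should align with the main principal component.
    top_pc = np.linalg.eigh(C)[1][:, -1]
    print("cosine with top PC:", abs(w @ top_pc) / np.linalg.norm(w))

The printed cosine should approach 1, indicating alignment of the weight vector with the main principal component; the norm of w settles where the variance and normalization gradients balance rather than at exactly 1, a property of the assumed quadratic penalty.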
