Abstract

It is known that Hebbian synapses, with appropriate weight normalization, extract the first principal component of the input patterns. Anti-Hebb rules have been used in combination with Hebb rules to extract additional principal components or to generate sparse codes. Here we show that simple anti-Hebbian synapses alone can support an important computational function: solving simultaneous linear equations. During repetitive learning with a simple anti-Hebb rule, the weights onto an output unit always converge to the exact solution of the linear equations whose coefficients correspond to the input patterns and whose constant terms correspond to the biases, provided that a solution exists. If there are more equations than unknowns and no solution exists, the weights approach the values obtained with the Moore-Penrose generalized inverse (pseudoinverse). No explicit matrix inversion is involved, and there is no need to normalize the weights. Mathematically, the anti-Hebb rule may be regarded as an iterative algorithm for learning a special case of the linear associative mapping. Since solving systems of linear equations is a basic computational problem to which many other problems are often reduced, our interpretation suggests a potentially general computational role for anti-Hebbian synapses and for a certain type of long-term depression (LTD).
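Read as a delta-rule update on a single linear output unit, the claim is easy to check numerically. The sketch below is a minimal illustration of that reading, not the paper's own code: the matrix A, vector b, learning rate eta, and epoch count are our assumptions. Each row of A is presented as an input pattern, the equation's constant term serves as the bias, and the anti-Hebb update Δw = −η y x with output y = w·x − b drives the output toward zero, i.e., drives w toward a solution of A w = b.

```python
import numpy as np

# Illustrative system A w = b (example values, not from the paper):
# each row of A is one input pattern (the coefficients of one equation);
# b holds the constant terms, used as the biases of the output unit.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0],
              [3.0, 1.0,  0.0]])   # 4 equations, 3 unknowns
b = np.array([1.0, 2.0, 0.5, 1.5])

w = np.zeros(A.shape[1])  # initial weights (zero start; an assumption)
eta = 0.01                # small fixed learning rate (an assumption)

# Repetitive presentation of the patterns with the anti-Hebb rule
# delta_w = -eta * y * x, where y = w.x - b is the unit's output.
for _ in range(20000):
    for x, beta in zip(A, b):
        y = w @ x - beta   # output: weighted input minus the bias
        w -= eta * y * x   # anti-Hebbian change, opposing output * input

print("anti-Hebb weights:    ", w)               # ~ (0.35, 0.45, 0.15)
print("pseudoinverse weights:", np.linalg.pinv(A) @ b)
```

This particular system happens to be consistent, so the weights converge to the exact solution and agree with np.linalg.pinv(A) @ b. For an inconsistent overdetermined system the same loop instead approaches the least-squares values given by the pseudoinverse, up to a small residual oscillation set by the fixed learning rate.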
