Abstract

This paper presents a mathematical framework that describes learning as convergence to a stable equilibrium point of a Hebbian neural network. The framework implies that the network dynamics can be approximated by a system of diffusion equations, which has a stable equilibrium provided a particular parameter is sufficiently large. The learned state is identified with this stable equilibrium point. The model suggests that the magnitude of the eigenvalues of the system linearised about the fixed point should be directly related to the slope of an exponential learning curve.
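The link between eigenvalue magnitude and learning-curve slope can be made concrete with a standard linearisation argument; the sketch below uses generic notation that is not taken from the paper and is only meant to illustrate the claimed relationship.

% Sketch (generic notation, assumed for illustration): linearise the dynamics
% \dot{x} = f(x) about the stable fixed point x^* (the learned state).
\[
  \delta x(t) = x(t) - x^{*}, \qquad
  \dot{\delta x} \approx J\,\delta x, \qquad
  J = \left.\frac{\partial f}{\partial x}\right|_{x = x^{*}} .
\]
% Along an eigenvector v_i of J with eigenvalue \lambda_i (Re \lambda_i < 0 at a
% stable equilibrium), the deviation from the learned state decays exponentially:
\[
  \delta x(t) \approx c_i\, e^{\lambda_i t}\, v_i ,
  \qquad \lVert \delta x(t) \rVert \sim e^{-\lvert \operatorname{Re}\lambda_i \rvert\, t},
\]
% so the eigenvalue magnitude sets the decay rate, i.e. the slope of an
% exponential learning curve plotted on a log scale.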
