Abstract

Elementary units characterized by a threshold-linear (graded) response have been argued to model single neurons in auto-associative networks more realistically than binary units. The different way in which local activity is constrained in the two representations is shown here to have important consequences for the spin-glass-like properties of otherwise equivalent systems. In particular, in contrast with its binary counterpart, the threshold-linear Sherrington-Kirkpatrick model is stable with respect to replica symmetry breaking (RSB), while threshold-linear fully connected neural networks with covariance learning are RSB-unstable only in a very restricted region of their phase diagram. Whether or not spin-glass effects dominate the attractor dynamics is suggested to considerably affect, among other things, the ability of auto-associative memories to encode new information.
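
For reference, a minimal sketch of the two ingredients named above, following standard conventions in this literature; the symbols g, theta, h_i, eta_i^mu and a are not defined in the abstract and are assumptions here. A threshold-linear unit responds to its local field h_i with a graded output that is zero below threshold and linear above it, and covariance ("Hebbian") learning builds the couplings from the p stored patterns eta^mu with mean activity a:

\[
  V_i \;=\; g\,[\,h_i - \theta\,]_+ \;=\;
  \begin{cases}
    g\,(h_i - \theta), & h_i > \theta,\\[2pt]
    0, & h_i \le \theta,
  \end{cases}
  \qquad
  J_{ij} \;\propto\; \sum_{\mu=1}^{p} \bigl(\eta_i^{\mu} - a\bigr)\bigl(\eta_j^{\mu} - a\bigr).
\]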
