Abstract

Nonlinear associative memories, as realized e.g. by Hopfield nets, are characterized by attractor-type dynamics: when fed with a starting pattern they converge to exactly one of the stored patterns, which is supposed to be the most similar one. Such systems cannot form hypotheses of classification, i.e. offer several possible answers to a given classification problem. Inspired by C. von der Malsburg’s correlation theory of brain function, we extend conventional neural network architectures by introducing additional dynamical variables, the so-called phases, one for each formal neuron in the net. The phases measure detailed correlations of neural activities that are neglected in conventional neural network architectures. Using simple self-organizing networks based on feature map algorithms, we present an associative memory that is actually capable of forming hypotheses of classification.
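
For illustration, the following is a minimal sketch of the conventional Hopfield-style retrieval that the abstract contrasts against, not the phase-extended model proposed in the paper: Hebbian weights store bipolar patterns, and asynchronous updates drive the network state into exactly one stored attractor, so the net commits to a single answer rather than a set of hypotheses. All function names and the example patterns here are illustrative choices.

```python
# Illustrative sketch of conventional Hopfield retrieval (attractor dynamics),
# not the phase-based extension described in the paper.
import numpy as np

def hebbian_weights(patterns):
    """Build a Hopfield weight matrix from bipolar (+1/-1) patterns given as rows."""
    P = np.asarray(patterns, dtype=float)
    n = P.shape[1]
    W = P.T @ P / n
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def retrieve(W, state, max_sweeps=100, rng=None):
    """Asynchronously update neurons until the state no longer changes (a fixed point)."""
    rng = rng or np.random.default_rng(0)
    s = np.asarray(state, dtype=float).copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1.0 if W[i] @ s >= 0 else -1.0
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:                # converged to exactly one stored pattern
            break
    return s

# A noisy cue converges to a single stored attractor:
patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, -1, -1, 1, 1]]
W = hebbian_weights(patterns)
cue = np.array([1, -1, 1, -1, -1, -1])   # corrupted version of the first pattern
print(retrieve(W, cue))
```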
