Abstract

The Sparse, Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two models is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. The stored information is proportional to the number of connections, and it is shown that this proportionality constant is the same for the SDM, the Hopfield model, and higher-order models. The models are also compared in their ability to store and recall temporal sequences of patterns. The SDM is also extended to include time delays so that contextual information can be used to recover sequences. A generalization of the SDM allows storage of correlated patterns.
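To make the two memories being compared concrete, the sketch below implements standard textbook formulations of an SDM (random hard locations, Hamming-radius activation, counter accumulation) and a Hopfield network (outer-product Hebbian weights, iterated sign updates). It is a minimal illustration, not the paper's notation; the dimension, number of hard locations, and activation radius are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256        # dimension of stored bipolar (+/-1) vectors
m = 2000       # number of SDM hard locations (can grow independently of n)
radius = 111   # Hamming activation radius (assumed value for this sketch)

# --- Sparse Distributed Memory: fixed random addresses plus counters ---
hard_addresses = rng.choice([-1, 1], size=(m, n))
counters = np.zeros((m, n))

def hamming(a, b):
    # Hamming distance between bipolar vectors via their dot product.
    return (n - a @ b) // 2

def sdm_write(address, data):
    # Activate all hard locations within `radius` of the address and
    # accumulate the data vector into their counters.
    active = hamming(hard_addresses, address) <= radius
    counters[active] += data

def sdm_read(address):
    # Sum counters of activated locations and threshold to +/-1.
    active = hamming(hard_addresses, address) <= radius
    s = counters[active].sum(axis=0)
    return np.where(s >= 0, 1, -1)

# --- Hopfield network: outer-product storage, sign-update recall ---
def hopfield_store(patterns):
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def hopfield_recall(W, x, steps=10):
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# Store a few random patterns autoassociatively in both memories.
patterns = rng.choice([-1, 1], size=(5, n))
for p in patterns:
    sdm_write(p, p)
W = hopfield_store(patterns)

# Recall from a noisy cue (about 10% of bits flipped).
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
cue[flip] *= -1
print("SDM recall matches:     ", np.array_equal(sdm_read(cue), patterns[0]))
print("Hopfield recall matches:", np.array_equal(hopfield_recall(W, cue), patterns[0]))
```

In this toy setting the capacity contrast described in the abstract is visible in the parameters: the Hopfield weight matrix is fixed at n x n, while the SDM's storage (m hard locations) can be enlarged without changing the dimension n of the stored vectors.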

