Abstract

We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in the networks and that associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi-like network sparsely connected to a Willshaw-type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse, brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations, even if each is presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.
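
To make the one-shot, palimpsest-style learning rule more concrete, the following sketch illustrates an Amit-Fusi-style update of binary synapses on a fixed sparse connectivity: when an association is presented once, existing synapses between co-active neurons are potentiated, while existing synapses from active presynaptic onto inactive postsynaptic neurons are depressed with a small probability, so that newer associations gradually overwrite older ones. This is a minimal sketch under assumptions of our own; the population sizes, probabilities, and function names are illustrative and are not the paper's parameters or code.

    import numpy as np

    rng = np.random.default_rng(0)

    def present_association(W, mask, a_pattern, b_pattern, q_plus=1.0, q_minus=0.05):
        """One-shot, Amit-Fusi-style update of binary synapses W (shape B x A).

        W[j, i] in {0, 1} is the weight of the synapse from neuron i in
        population A to neuron j in population B; `mask` marks which synapses
        exist at all (the sparse random connectivity is fixed, only weights
        change).  Synapses between co-active neurons are potentiated with
        probability q_plus; synapses from an active presynaptic onto an
        inactive postsynaptic neuron are depressed with probability q_minus,
        which slowly erases old associations (the palimpsest property).
        The exact depression condition is an illustrative assumption.
        """
        pre = a_pattern.astype(bool)      # active neurons in A (presynaptic)
        post = b_pattern.astype(bool)     # active neurons in B (postsynaptic)

        potentiate = mask & np.outer(post, pre) & (rng.random(W.shape) < q_plus)
        depress = mask & np.outer(~post, pre) & (rng.random(W.shape) < q_minus)

        W[potentiate] = 1
        W[depress] = 0
        return W

    # Illustrative sizes: 1000 neurons per population, sparse connectivity and patterns.
    n_a, n_b, p_edge, pattern_size = 1000, 1000, 0.1, 30
    mask = rng.random((n_b, n_a)) < p_edge        # fixed sparse afferent connectivity
    W = np.zeros((n_b, n_a), dtype=np.uint8)      # binary synaptic weights

    a_pat = np.zeros(n_a, dtype=np.uint8)
    a_pat[rng.choice(n_a, pattern_size, replace=False)] = 1
    b_pat = np.zeros(n_b, dtype=np.uint8)
    b_pat[rng.choice(n_b, pattern_size, replace=False)] = 1
    W = present_association(W, mask, a_pat, b_pat)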


Summary

INTRODUCTION

In recent decades the problem of fast pattern learning has been studied intensively. Amit and Fusi (1994) introduced a model of auto-associative memory for sparsely coded patterns in fully connected neuronal networks and showed that in this model an ensemble of N neurons can store almost quadratically many patterns before it starts forgetting old ones, even if each pattern is presented only once. Our model is related to the latter approach, except that we are more restrictive in the retrieval procedure so that the model is still fast-retrieving (cf. Section 4.1): we consider a bipartite graph with partite sets A and B, where all edges are directed from A to B (“afferent edges”), and iterative retrieval is achieved only by the edges within B (“recurrent edges”); see Figure 1 for the setup. In this respect, a similar retrieval scheme for the Willshaw model has been studied by Knoblauch and Palm (2001), with the differences that they used inhibition to stop the spread of activity after the pattern is activated and that they used a global feedback scheme for threshold control. Memory capacity depends on the pattern size in a unimodal way (cf. Figure 4): the pattern size should be neither too small nor too large.
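
As a concrete illustration of this setup, the sketch below (a minimal toy, not the paper's implementation) builds a sparse bipartite graph with afferent edges from A to B and recurrent edges within B, stores one association by clipped one-shot potentiation of the existing edges between co-active neurons, and then retrieves the B pattern from the A cue by a single afferent step followed by iterative recurrent steps, in the spirit of bootstrap percolation. The sizes, edge densities, and fixed thresholds below are assumptions chosen for illustration; the paper's threshold control and palimpsest depression step are omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative sizes, densities, and thresholds (assumptions, not the paper's parameters).
    n_a, n_b = 1000, 1000          # neurons in populations A and B
    p_aff, p_rec = 0.1, 0.1        # densities of afferent (A -> B) and recurrent (B -> B) edges
    pattern_size = 40              # active neurons per pattern
    theta_aff, theta_rec = 2, 2    # firing thresholds for afferent and recurrent input

    aff_mask = rng.random((n_b, n_a)) < p_aff     # fixed sparse afferent connectivity
    rec_mask = rng.random((n_b, n_b)) < p_rec     # fixed sparse recurrent connectivity in B
    np.fill_diagonal(rec_mask, False)

    W_aff = np.zeros((n_b, n_a), dtype=np.uint8)  # binary afferent weights
    W_rec = np.zeros((n_b, n_b), dtype=np.uint8)  # binary recurrent weights

    def random_pattern(n, k):
        x = np.zeros(n, dtype=bool)
        x[rng.choice(n, k, replace=False)] = True
        return x

    def store(a_pat, b_pat):
        """One-shot, Willshaw-style clipped potentiation on existing edges only."""
        W_aff[np.outer(b_pat, a_pat) & aff_mask] = 1
        W_rec[np.outer(b_pat, b_pat) & rec_mask] = 1

    def retrieve(a_cue, max_iters=20):
        """Afferent activation followed by iterative recurrent retrieval in B.

        A neuron in B fires initially if it receives at least theta_aff
        potentiated afferent inputs from the cue; afterwards, inactive B
        neurons join whenever they receive at least theta_rec potentiated
        recurrent inputs from already-active B neurons.
        """
        active = (W_aff @ a_cue.astype(np.int64)) >= theta_aff
        for _ in range(max_iters):
            new = (~active) & ((W_rec @ active.astype(np.int64)) >= theta_rec)
            if not new.any():
                break
            active |= new
        return active

    a_pat, b_pat = random_pattern(n_a, pattern_size), random_pattern(n_b, pattern_size)
    store(a_pat, b_pat)
    recalled = retrieve(a_pat)
    print("recalled", int((recalled & b_pat).sum()), "of", int(b_pat.sum()),
          "target neurons,", int((recalled & ~b_pat).sum()), "spurious")

Because activity can only grow during the recurrent iterations, the process reaches a fixed point; this monotone spreading on the recurrent graph is what links the retrieval dynamics to bootstrap percolation.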

MODEL OVERVIEW AND ASSUMPTIONS
COMPARISON WITH THE MODEL OF AMIT AND FUSI
THEORETICAL RESULTS
Edge probabilities
Degree distribution
Learning without recurrent edges
Learning with recurrent edges: percolation
The optimal plasticity constant
Noise tolerance
EXPERIMENTAL RESULTS
DISCUSSION
PATTERN SIZES AND PLASTICITY
