Abstract

Memories in neural systems are shaped through the interplay of neural and learning dynamics under external inputs. By introducing a simple local learning rule to a neural network, we found that the memory capacity is drastically increased by sequentially repeating the learning steps of input-output mappings. The origin of this enhancement is attributed to the generation of a pseudo-inverse correlation in the connectivity. This is associated with the emergence of spontaneous activity that intermittently exhibits neural patterns corresponding to embedded memories. Stabilization of memories is achieved by a distinct bifurcation from the spontaneous activity under the application of each input.
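
The pseudo-inverse correlation invoked here can be made concrete with the standard pseudo-inverse (projection) rule. The numpy sketch below is illustrative only: the network size, pattern count, and binary-pattern setup are assumptions rather than the paper's model, but it shows why pseudo-inverse connectivity stores patterns as exact fixed points at loads where the plain Hebbian rule (capacity ~0.14N) fails.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 100                              # neurons, stored patterns (P < N)
xi = rng.choice([-1.0, 1.0], size=(P, N))    # random binary target patterns

# Pseudo-inverse (projection) rule: J = xi^T C^{-1} xi / N,
# where C_{mu,nu} = xi_mu . xi_nu / N is the pattern-overlap matrix.
C = xi @ xi.T / N
J = xi.T @ np.linalg.inv(C) @ xi / N

# By construction J xi_mu = xi_mu, so every pattern is an exact fixed
# point of the map x -> sign(J x).
for mu in range(P):
    assert np.array_equal(np.sign(J @ xi[mu]), xi[mu])
print(f"all {P} patterns are fixed points of sign(J x)")
```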

Highlights

  • How memories are successively embedded into neural dynamics through the interplay between neural dynamics and the learning process is a crucial question in neuroscience

  • We address what kind of neural network emerges during this process and how memories are represented in neural dynamics upon input

  • By studying neural networks that memorize I/O maps, we have shown how repeated learning stabilizes each memorized state and enhances memory capacity via the interplay between neural dynamics and learning (see the sketch below)
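
This excerpt does not reproduce the paper's local learning rule, so the sketch below substitutes a perceptron-like local correction (the perceptron model is one of the comparison models listed later) to illustrate the sequential, repeated presentation of input-output pairs; the sizes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, eta = 100, 30, 0.1
inputs  = rng.choice([-1.0, 1.0], size=(P, N))   # input patterns
targets = rng.choice([-1.0, 1.0], size=(P, N))   # desired outputs

J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random initial coupling

# Sequentially and repeatedly present each input-output pair; whenever a
# unit's output disagrees with its target, apply the local correction
# Delta J_ij = eta * target_i * input_j / N (uses only pre- and
# postsynaptic quantities).
for sweep in range(100):
    errors = 0
    for mu in range(P):
        out = np.sign(J @ inputs[mu])
        wrong = out != targets[mu]
        errors += int(wrong.sum())
        J += eta * np.outer(wrong * targets[mu], inputs[mu]) / N
    if errors == 0:
        break
print(f"converged after {sweep + 1} sweeps" if errors == 0
      else f"{errors} unit errors remain")
```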

INTRODUCTION

Synaptic connections are modified to shape neural dynamics such that the applied stimulus and the desired response are adequately represented therein. In conventional models [6,7,8], multiple memories are embedded into corresponding attractors generated by a simple learning rule. In spite of their success, neural dynamics in these models are often decoupled from those of synapses: synapses are slowly modified according to the desired targets, while the faster neural dynamics of relaxation to memory attractors are studied independently [1,9] (but see [10]). We first introduce a theoretical formulation for a sequential and repeated learning process. We then study spontaneous dynamics in the absence of input, which have been suggested to be involved in computations in neural systems [12,13,14,15,16,17].
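
As a reference point for the conventional attractor picture described above [6,7,8], here is a minimal Hopfield-type sketch, assuming the textbook one-shot Hebbian rule: learning and recall are fully decoupled, with recall reduced to relaxation of the neural dynamics alone. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 20
xi = rng.choice([-1.0, 1.0], size=(P, N))   # random binary memories

# Hebbian one-shot rule: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu.
# Synapses are fixed after this step; they play no role during recall.
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

# Recall = relaxation of the neural dynamics alone: start from a
# corrupted memory and iterate x <- sign(J x) until it settles.
x = xi[0] * np.where(rng.random(N) < 0.1, -1.0, 1.0)  # flip ~10% of bits
for _ in range(20):
    x_new = np.sign(J @ x)
    if np.array_equal(x_new, x):
        break
    x = x_new
print("overlap with stored memory:", x @ xi[0] / N)   # ~1.0 on success
```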

Memory capacity
Comparison with other models
Perceptron model
Pseudoinverse model
Hopfield-type model
Decorrelation of inputs and targets through the learning process
Representation of memories
Spontaneous activity
DISCUSSION
Recall performance as N increases
Change in recall performance during learning process
Confining neural states around targets during learning
Dependence of ξJξ^t on N and α
Response to input mixtures
Equivalence of our learning and pseudoinverse rules