Abstract

Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Such a memory is often considered to be embedded in one of the attractors of a neural dynamical system, selected by the initial neural state that an input specifies. In the past, neither the neural activity observed in the absence of inputs nor the changes evoked in that activity when an input is applied were studied extensively. Recent experimental studies, however, have reported the existence of structured spontaneous neural activity and its changes when an input is provided. With this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are memorized successively when the difference between the time scales is appropriate. After learning is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over the patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of the successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.

Highlights

  • One of the most important features of the brain is the ability to learn and regenerate an appropriate response to external stimuli

  • Neural dynamics in the learning process: we show that our model can learn I/O mappings under the proposed framework

  • At t ≈ 300, the magnitude of the error decreases sufficiently, i.e., the output dynamics of the neural activity fall within the neighborhood ε of the target, at which point the synaptic plasticity switches from the Hebbian rule to the anti-Hebbian rule
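The error-dependent switch between Hebbian and anti-Hebbian plasticity highlighted above can be sketched as a toy rate model. Everything here is an illustrative assumption (network size, the tanh rate dynamics, the outer-product update, and the two learning rates standing in for the fast and slow synaptic time scales), not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                             # number of neurons (toy size, assumed)
eps = 0.1                          # error threshold: the "neighborhood" of the target
eta_fast, eta_slow = 0.1, 0.01     # two synaptic time scales (assumed values)

W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # fast synapses
V = rng.normal(0, 1 / np.sqrt(N), (N, N))   # slow synapses

x = np.tanh(rng.normal(size=N))             # neural activity
target = np.sign(rng.normal(size=N))        # desired output pattern

for t in range(1000):
    x = np.tanh((W + V) @ x)                # rate dynamics with both synapse types
    err = np.mean((x - target) ** 2)
    # Hebbian while far from the target, anti-Hebbian once within eps of it
    sign = 1.0 if err > eps else -1.0
    W += sign * eta_fast * np.outer(target, x)
    V += sign * eta_slow * np.outer(target, x)
```

The sign flip at the ε boundary is the point of the sketch: once the output is close enough to the target, the reversed update stops the weights from overshooting.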


Introduction

One of the most important features of the brain is the ability to learn and regenerate an appropriate response to external stimuli. Through modification of synaptic strengths, output responses to input stimuli are memorized. In supervised learning with multi-layer neural networks [4], inputs are provided as the initial states of an input layer, and the neural activity of the output layer is determined on the basis of those inputs. In this manner, an input determines the initial state of the system, while an output is given by an attractor of the neural activity dynamics. Each output pattern is memorized as an attractor, a scheme often referred to as "memories as attractors."
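The "memories as attractors" scheme can be illustrated with a minimal Hopfield-style network. This is a standard textbook construction used only to make the idea concrete, not the model proposed in this paper; the sizes and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                         # neurons, stored patterns (assumed sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: each pattern becomes a fixed point (attractor) of the dynamics
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Recall: start from a corrupted version of pattern 0 and let the dynamics relax
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1                     # flip 10% of the bits

for _ in range(20):                   # synchronous updates toward the attractor
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state @ patterns[0]) / N   # overlap of 1.0 means perfect recall
```

Here the input only sets the initial state; the stored pattern is recovered because the dynamics flow into the attractor carved out by the Hebbian weights.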

