Abstract
Hebbian learning has been implicated as a possible mechanism in a wide range of learning and memory functions in the brain. A large body of theoretical studies and simulations has investigated its implications for the dynamics of single-neuron as well as network models. For example, neural network models have been found to produce meaningful internal states when driven by structured external stimulation. These studies, however, typically lack a notion of an output in the form of a well-specified pattern of network activity corresponding to a relevant functional output. To impose a desired input-output relation, various forms of supervised learning (or at least reinforcement in the form of an external cue) are often invoked. Recently there has been increasing interest in computational models that involve a separation of time scales between relatively fast plasticity rules and considerably slower reinforcement mechanisms. A large majority of these studies focuses on the role of neuromodulators, such as dopamine. Here, we study a training protocol within such a closed-loop setup, with the separation of time scales appearing between a fast learning rule and slower synaptic fatigue. Our model is motivated in part by a series of experiments on ex-vivo cultures of neuronal networks [1,2]. Such self-assembled networks are perhaps closest in their topology to the random, recurrent networks underlying typical neural network simulation models, and they lack the complexity of a whole brain, or even a slice. It is an open question whether ex-vivo cultures of neurons and glia can support learning, and if so, what their capacity is and which mechanisms underlie such phenomena [2]. Here, we study a recurrent network of integrate-and-fire neurons with competitive Hebbian learning (STDP), subject to a learning protocol in which stimulation is suppressed in response to the onset of a desired output. A local activity-dependent second messenger is used to modulate the level of plasticity.
The activity of the network (mediated by external stimulation and reinforcement) directly regulates the second messenger, thus effectively closing the loop. We show how successful learning in these networks depends on the interplay between the network's ability, first, to explore its space of configurations to obtain a desired output, and second, to converge reliably to that configuration in response to the external cues. These results extend the traditional competitive view of Hebbian learning by refining the rule's dependence on slow (or long-term) input patterns. By explicitly subjecting the network to (i) competitive learning, (ii) explicit reinforcement and (iii) activity-dependent plasticity modulation, meaningful patterns of input-output relations can be learned by the network.
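The closed-loop protocol described above can be sketched in simulation: a recurrent integrate-and-fire network with trace-based STDP, plasticity gated multiplicatively by a slow activity-dependent second messenger, and external stimulation that is suppressed once the desired output group fires. This is a minimal illustrative sketch; all names, group assignments and parameter values here are assumptions for illustration and are not taken from the abstract or the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                          # number of neurons (illustrative)
T = 2000                        # simulation steps (1 step ~ 1 ms)
tau_v = 20.0                    # membrane time constant (steps)
v_th, v_reset = 1.0, 0.0        # spike threshold / reset potential
w_max = 0.3                     # hard upper bound on weights

W = rng.uniform(0.0, 0.1, (N, N))   # recurrent weights W[i, j]: j -> i
off_diag = ~np.eye(N, dtype=bool)   # no self-connections
W *= off_diag

stim_group = np.arange(0, 10)       # neurons receiving external stimulation
target_group = np.arange(40, 45)    # hypothetical "desired output" group

# Trace-based STDP: pre/post eligibility traces and learning rates
tau_pre = tau_post = 20.0
a_plus, a_minus = 0.01, 0.012
x_pre = np.zeros(N)
x_post = np.zeros(N)

m = 0.0                         # slow second messenger gating plasticity
tau_msg = 500.0                 # its time constant, much slower than STDP

v = np.zeros(N)
spikes = np.zeros(N, dtype=bool)
stim_on = True

for t in range(T):
    I = W @ spikes + rng.uniform(0.0, 0.15, N)   # recurrent + noise input
    if stim_on:
        I[stim_group] += 0.4                     # external stimulation
    v = v * (1.0 - 1.0 / tau_v) + I              # leaky integration
    spikes = v >= v_th
    v[spikes] = v_reset

    # STDP, gated multiplicatively by the slow messenger m:
    # potentiate j -> i when i fires after j; depress when j fires after i.
    dW = m * (a_plus * np.outer(spikes, x_pre)
              - a_minus * np.outer(x_post, spikes))
    W = np.clip(W + dW * off_diag, 0.0, w_max)

    # decay traces and the slow messenger, then add the new spikes
    x_pre = x_pre * np.exp(-1.0 / tau_pre) + spikes
    x_post = x_post * np.exp(-1.0 / tau_post) + spikes
    m += (-m + spikes.mean()) / tau_msg

    # closing the loop: suppress stimulation at the onset of desired output
    if stim_on and spikes[target_group].sum() >= 3:
        stim_on = False
```

The hard bound `w_max` together with the asymmetric rates (`a_minus > a_plus`) gives the competitive flavor of the rule, while `m` evolves on a time scale (`tau_msg`) hundreds of steps slower than the traces, mirroring the separation of time scales between fast plasticity and slow activity-dependent modulation.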
From the Sixteenth Annual Computational Neuroscience Meeting: CNS*2007 (William R Holmes). Meeting abstracts: a single PDF containing all abstracts in this Supplement is available at http://www.biomedcentral.com/content/pdf/1471-2202-8-S2-info.pdf
Email: Olivier Rochel* - Olivier.Rochel@sophia.inria.fr; Netta Cohen - netta@comp.leeds.ac.uk (* corresponding author). From the Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada.