Abstract

General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.

Highlights

  • We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in networks of neurons in the brain

  • We propose that the seeming unreliability of synaptic connections is not a bug, but an important feature

  • It endows networks of neurons with an important, experimentally observed but theoretically not understood capability: automatic compensation for internal and external changes. This perspective on network plasticity requires a new conceptual and mathematical framework, which this article provides

Introduction

We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in networks of neurons in the brain. The commonly held view is that plasticity moves network parameters θ (such as synaptic connections between neurons and synaptic weights) to values θ* that are optimal for the current computational function of the network. In learning theory, this view is made precise, for example, as maximum likelihood learning, where model parameters θ are moved to values θ* that maximize the fit of the resulting internal model to the inputs x that impinge on the network from its environment (by maximizing the likelihood of these inputs x). The convergence to θ* is often assumed to be facilitated by some external regulation of learning rates that reduces the learning rate as the network approaches an optimal solution. This view of network plasticity has been challenged on several grounds. Networks of neurons in the brain are apparently exposed to a multitude of internal and external changes and perturbations, to which they have to respond quickly in order to maintain stable functionality.
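To make the contrast concrete, here is a minimal sketch, on a toy estimation problem, of the two views of parameter learning discussed above. It is not the model proposed in this article: all names, distributions, and constants are illustrative assumptions. Deterministic gradient ascent on the log-likelihood converges to a single maximum likelihood value θ*, whereas a Langevin-style stochastic update, whose noise term plays the role of inherently stochastic plasticity, keeps θ fluctuating so that its long-run distribution approximates a posterior combining a prior with the data.

```python
import random, math

random.seed(0)

# Toy "inputs x": 200 draws from a unit-variance Gaussian with unknown mean.
data = [random.gauss(2.0, 1.0) for _ in range(200)]
n = len(data)
mean_x = sum(data) / n

def grad_log_lik(theta):
    # d/dtheta of log p(x | theta) for a unit-variance Gaussian likelihood
    return sum(x - theta for x in data)

def grad_log_prior(theta):
    # Gaussian prior on theta, centered at 0 with unit variance
    return -theta

# (1) Maximum likelihood view: deterministic gradient ascent converges
#     to a single point estimate theta* (here, the sample mean).
theta_ml = 0.0
lr = 1e-3
for _ in range(2000):
    theta_ml += lr * grad_log_lik(theta_ml)

# (2) Sampling view: Langevin dynamics injects noise at every update, so
#     theta never settles; its stationary distribution approximates the
#     posterior p(theta | x) proportional to prior * likelihood.
theta = 0.0
eta = 1e-3
samples = []
for step in range(20000):
    noise = math.sqrt(2 * eta) * random.gauss(0.0, 1.0)
    theta += eta * (grad_log_prior(theta) + grad_log_lik(theta)) + noise
    if step > 5000:  # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
```

In this conjugate toy setting the posterior mean is n·x̄/(n+1), slightly shrunk toward the prior mean 0 relative to the maximum likelihood estimate x̄; the persistent fluctuations of θ in the second loop are the analogue of the ongoing stochastic parameter changes the article attributes to synaptic plasticity and spine motility.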


