Abstract

Understanding how the brain learns may lead to machines with human-like intellectual capacities. It was previously proposed that the brain may operate on the principle of predictive coding. However, it is still not well understood how a predictive system could be implemented in the brain. Here we demonstrate that the ability of a single neuron to predict its future activity may provide an effective learning mechanism. Interestingly, this predictive learning rule can be derived from a metabolic principle, whereby neurons need to minimize their own synaptic activity (cost) while maximizing their impact on local blood supply by recruiting other neurons. We show how this mathematically derived learning rule can provide a theoretical connection between diverse types of brain-inspired algorithm, thus offering a step towards the development of a general theory of neuronal learning. We tested this predictive learning rule in neural network simulations and in data recorded from awake animals. Our results also suggest that spontaneous brain activity provides ‘training data’ for neurons to learn to predict cortical dynamics. Thus, the ability of a single neuron to minimize surprise—that is, the difference between actual and expected activity—could be an important missing element to understand computation in the brain.
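As a rough illustration of this last point, the sketch below trains a single model neuron to reduce its surprise: at each step the synaptic weights move in proportion to the difference between actual and expected activity. This is a minimal sketch of the idea, not the paper's implementation; the input statistics, the tanh nonlinearity, the learning rate and all variable names are our assumptions.

```python
import numpy as np

# Toy demonstration: a neuron learns to predict its own future activity.
# "Actual" activity is generated by fixed environment weights (w_env),
# standing in for the surrounding network; the neuron's expectation uses
# its learnable weights (w). All names and constants are illustrative.

rng = np.random.default_rng(0)
n_inputs = 20                       # presynaptic population size (arbitrary)
eta = 0.05                          # learning rate (arbitrary)
w = np.zeros(n_inputs)              # learnable synaptic weights
w_env = rng.normal(scale=0.3, size=n_inputs)  # drives the "actual" activity

for step in range(2000):
    x = rng.random(n_inputs)        # current presynaptic activity
    actual = np.tanh(w_env @ x)     # neuron's actual future activity
    expected = np.tanh(w @ x)       # neuron's own prediction of it
    surprise = actual - expected    # actual minus expected activity
    w += eta * surprise * x         # weight change reduces future surprise

print("final |surprise|:", abs(surprise))  # should be small after training
```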

Highlights

  • Understanding how the brain learns may lead to machines with human-like intellectual capacities

  • As many basic properties of neurons are highly conserved throughout evolution[15,16,17], we suggest that a single neuron using a predictive learning rule could provide an elementary unit from which a variety of predictive brains may be built

  • In the ‘free phase’, a sample stimulus is continuously presented to the input layer and the activity propagates through the network until the dynamics converge to an equilibrium (see the sketch following this list)
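
A minimal sketch of this free-phase relaxation, assuming a small recurrent tanh network whose weights are scaled weakly enough that the dynamics contract to a fixed point (all sizes, scales and tolerances below are illustrative choices, not the paper's):

```python
import numpy as np

# 'Free phase' sketch: the input units are held at the stimulus while
# recurrent activity is propagated until it settles to an equilibrium.
# Sizes, weight scales and the tolerance are illustrative choices.

rng = np.random.default_rng(1)
n_in, n_hid = 10, 30
W_in = rng.normal(scale=0.2, size=(n_hid, n_in))     # input -> hidden
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))   # hidden -> hidden

def free_phase(stimulus, tol=1e-6, max_iter=500):
    """Iterate the dynamics with the input clamped until convergence."""
    x = np.zeros(n_hid)
    for _ in range(max_iter):
        x_new = np.tanh(W_in @ stimulus + W_rec @ x)
        if np.max(np.abs(x_new - x)) < tol:          # reached equilibrium
            return x_new
        x = x_new
    return x                                         # fallback: best effort

x_eq = free_phase(rng.random(n_in))                  # equilibrium activity
```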

Introduction

Understanding how the brain learns may lead to machines with human-like intellectual capacities. There are two main approaches to investigating learning mechanisms in the brain: (1) experimental, where persistent changes in neuronal activity are induced by a specific intervention[2], and (2) computational, where algorithms are developed to achieve specific computational objectives while still satisfying selected biological constraints[3,4]. In this Article we explore an additional option, (3) theoretical derivation, where a learning rule is derived from basic cellular principles, that is, from maximizing the metabolic energy of a cell. The resulting rule is closely related to contrastive Hebbian learning, in which the network is run in two phases: the ‘free phase’, where only the input layer is driven by the stimulus, and a ‘clamped phase’, where the output neurons are additionally fixed at their desired values. The difference between activity in the clamped (x̂) and free (x̌) phases is then used to modify the synaptic weights (w) according to the equation

Δw_ij ∝ x̂_i x̂_j − x̌_i x̌_j
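The sketch below spells this two-phase update out, again under assumptions of our own: the layer assignment, settling schedule and learning rate are illustrative, and we use symmetric weights, for which contrastive Hebbian learning is conventionally stated.

```python
import numpy as np

# Contrastive Hebbian sketch: settle to equilibrium twice (free phase,
# then clamped phase with the outputs also fixed at their targets) and
# update each weight by the difference of the two Hebbian products,
# Δw_ij ∝ x̂_i x̂_j − x̌_i x̌_j.  All sizes and constants are illustrative.

rng = np.random.default_rng(2)
n_units, eta = 8, 0.01
W = rng.normal(scale=0.1, size=(n_units, n_units))
W = (W + W.T) / 2                    # symmetric connectivity
np.fill_diagonal(W, 0.0)             # no self-connections

def settle(clamped_idx, clamped_vals, n_iter=200):
    """Relax the network while holding the clamped units fixed."""
    x = np.zeros(n_units)
    for _ in range(n_iter):
        x = np.tanh(W @ x)
        x[clamped_idx] = clamped_vals
    return x

stim_idx, out_idx = [0, 1], [6, 7]             # which units are input/output
stimulus, target = rng.random(2), np.array([0.8, -0.8])

x_free = settle(stim_idx, stimulus)            # x̌: only input clamped
x_clamped = settle(stim_idx + out_idx,
                   np.concatenate([stimulus, target]))  # x̂: outputs also fixed

# Weight change from the difference of clamped and free Hebbian products
W += eta * (np.outer(x_clamped, x_clamped) - np.outer(x_free, x_free))
np.fill_diagonal(W, 0.0)
```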

