Abstract

Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. Yet it remains unclear how these seemingly redundant mechanisms could jointly orchestrate learning within a unified system. To this end, we propose a Hebbian learning rule for spiking neurons inspired by Bayesian statistics. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert.

Highlights

  • Bayesian inference provides an intuitive framework for how the nervous system could internalize uncertainty about the external environment by optimally combining prior knowledge with information accumulated during exposure to sensory evidence

  • We found that dynamical phenomena emerging from this mapping resembled processes that are thought to underlie learning and memory in cortical microcircuits

  • We first identify the synaptic and nonsynaptic correlates of this extension by studying the spike dynamics that accompany each assumption of the derivation; these functionally distinct computations are then considered together in a network setting, where spiking neurons perform a simple Bayesian inference task


Introduction

Bayesian inference provides an intuitive framework for how the nervous system could internalize uncertainty about the external environment by optimally combining prior knowledge with information accumulated during exposure to sensory evidence. Although probabilistic computation has received broad experimental support across psychophysical models describing the perceptual and motor behavior of humans (Wolpert and Körding, 2004; Knill, 2005; Tassinari et al., 2006), it remains an open theoretical issue at which level of detail within the neural substrate it should be embedded (Knill and Pouget, 2004). We propose a spike-based extension of the Bayesian Confidence Propagation Neural Network (BCPNN) plasticity rule (Lansner and Ekeberg, 1989; Lansner and Holst, 1996) to address these issues. In this model, storage and retrieval are enabled by gathering statistics about neural input and output activity.
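The statistics gathering described above can be illustrated with a minimal sketch of BCPNN-style trace dynamics for a single synapse. In the BCPNN formulation, fast traces low-pass filter pre- and postsynaptic spikes, slower traces estimate marginal and joint activation probabilities, and the synaptic weight is the log ratio of joint to product-of-marginal probabilities while the postsynaptic bias is the log of the output probability. The function name, time constants, and discretization below are illustrative placeholders, not values from this paper:

```python
import numpy as np

def bcpnn_traces(pre_spikes, post_spikes, dt=1.0,
                 tau_z=10.0, tau_p=1000.0, eps=1e-4):
    """Sketch of BCPNN-style trace dynamics for one synapse.

    pre_spikes, post_spikes: binary arrays (1 = spike in that time bin).
    Fast z-traces low-pass filter the spike trains; slow p-traces
    low-pass filter the z-traces and estimate activation probabilities.
    eps keeps probability estimates away from zero (log is undefined at 0).
    """
    n = len(pre_spikes)
    z_i = z_j = eps           # fast spike traces (pre, post)
    p_i = p_j = p_ij = eps    # slow marginal and joint probability estimates
    w = np.zeros(n)           # Bayesian weight over time
    b = np.zeros(n)           # postsynaptic bias over time
    for t in range(n):
        # fast traces track recent pre-/postsynaptic spiking
        z_i += dt * (pre_spikes[t] - z_i) / tau_z
        z_j += dt * (post_spikes[t] - z_j) / tau_z
        # slow traces estimate marginal and joint activation probabilities
        p_i += dt * (z_i - p_i) / tau_p
        p_j += dt * (z_j - p_j) / tau_p
        p_ij += dt * (z_i * z_j - p_ij) / tau_p
        # weight: log ratio of joint to independent co-activation;
        # bias: log prior of postsynaptic activation
        w[t] = np.log(p_ij / (p_i * p_j))
        b[t] = np.log(p_j)
    return w, b
```

Under this scheme, correlated pre- and postsynaptic firing drives the weight positive (joint activation exceeds chance level), while the bias tracks the log of the postsynaptic unit's overall activity, consistent with the idea that storage amounts to accumulating activity statistics.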

