Abstract

Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single-unit properties as widely used population code models (e.g. tuning curves, Poisson-distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
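The spike rule implied by the two assumptions can be made explicit. As one way to read the abstract's second assumption, suppose the estimate x̂ is decoded linearly and a spike from neuron i adds its readout weight Γ<sub>i</sub> to x̂; a greedy squared-error criterion then yields a threshold rule (a sketch under these assumptions, not the paper's full derivation):

```latex
% Neuron i fires iff adding its readout weight \Gamma_i reduces the error:
\[
  \|x - \hat{x} - \Gamma_i\|^2 \;<\; \|x - \hat{x}\|^2
  \;\Longleftrightarrow\;
  \Gamma_i^\top (x - \hat{x}) \;>\; \tfrac{1}{2}\,\|\Gamma_i\|^2 ,
\]
% (expand the left-hand square and cancel the common \|x-\hat{x}\|^2 term).
% This is a leaky integrate-and-fire rule with voltage and threshold
\[
  V_i = \Gamma_i^\top (x - \hat{x}), \qquad T_i = \tfrac{1}{2}\,\|\Gamma_i\|^2 ,
\]
```

so the membrane voltage V<sub>i</sub> is literally a prediction error about the population-level signal, as stated in the abstract.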

Highlights

  • Neural systems need to integrate, store, and manipulate sensory information before acting upon it

  • Our work shows how the ideas of predictive coding with spikes, first laid out within a Bayesian framework [19,20], can be generalized to design spiking neural networks that implement arbitrary linear dynamical systems

  • Let us consider a linear dynamical system describing the temporal evolution of a vector of J dynamical variables, x = (x_1, . . . , x_J):  dx/dt = A x + c(t)  (Eq. 1)
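A minimal, self-contained sketch of this idea (our own illustration, not the authors' published code): a population of neurons tracks a one-dimensional leaky integrator, i.e. Eq. 1 with A = -λ. The readout x̂ is a leaky sum of spikes, and a neuron fires only when doing so reduces the squared readout error, which is equivalent to its "voltage" Γ_i(x - x̂) crossing the threshold Γ_i²/2. The weights `G`, the input `c(t)`, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, dt, T = 20, 10.0, 1e-4, 1.0
G = rng.choice([-0.1, 0.1], size=N)     # assumed fixed decoding weights
steps = int(T / dt)
t = np.arange(steps) * dt
c = 50.0 * np.sin(2 * np.pi * 2 * t)    # example command input c(t)

x = 0.0                                  # target dynamical variable (Eq. 1)
xhat = 0.0                               # network readout
spikes = np.zeros(N)
err = np.zeros(steps)
for k in range(steps):
    x += dt * (-lam * x + c[k])          # integrate dx/dt = -lam*x + c(t)
    xhat += dt * (-lam * xhat)           # readout decays between spikes
    V = G * (x - xhat)                   # voltages = prediction errors
    i = np.argmax(V - G**2 / 2)          # greedy rule: at most one spike/step
    if V[i] > G[i]**2 / 2:               # spike only if it reduces the error
        xhat += G[i]                     # a spike updates the readout
        spikes[i] += 1
    err[k] = abs(x - xhat)

print(f"mean |x - xhat| = {err.mean():.4f}, total spikes = {int(spikes.sum())}")
```

Because each spike flips the sign of a small readout error rather than injecting noise, the tracking error stays bounded by roughly the size of a single decoding weight, illustrating the abstract's claim that variability in such networks is not equivalent to noise.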


Introduction

Neural systems need to integrate, store, and manipulate sensory information before acting upon it. Various neurophysiological and psychophysical experiments have provided examples of how these feats are accomplished in the brain, from the integration of sensory stimuli to decision-making [1], from the short-term storage of information [2] to the generation of movement sequences [3]. Much research on neural mechanisms has focused on studying neural networks in the framework of attractor dynamics [4,5,6]. These models generally assume that the system's state variables are represented by the instantaneous firing rates of neurons, and that the variability of individual spike trains is mere noise. However, the biophysical sources of noise in individual neurons are insufficient to explain such variability [11,12,13].
