Abstract

Cortical networks show large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online, close to optimally, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally, and that networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) integrators show strong correlations with other integrators and resonators with other resonators, whereas correlations between neurons with different coding properties are much weaker, and 2) 'type 2' neurons are more coherent with the overall network activity than 'type 1' neurons.
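The two assumptions above (linear decoding of every spike, and spiking only when it reduces the mean-squared error) can be turned into a greedy spike rule, in the spirit of spike-coding network models. The following is a minimal sketch of that idea, not the authors' actual implementation: the network topology, the one-dimensional input, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Greedy spike rule from the two assumptions: (1) the estimate is a linear
# readout x_hat = Gamma @ r of filtered spike trains r, and (2) neuron i fires
# only if its spike reduces the squared error (x - x_hat)^2, which yields the
# threshold T_i = Gamma_i^2 / 2 on the "membrane potential" V_i = Gamma_i*(x - x_hat).
rng = np.random.default_rng(0)
dt, lam = 1e-3, 10.0           # time step (s) and readout decay rate (1/s) -- illustrative
N = 20                         # number of neurons
Gamma = rng.normal(0, 0.1, N)  # heterogeneous decoding weights (1-D signal for simplicity)
T = Gamma**2 / 2               # spike thresholds: fire iff the spike lowers the error

steps = 2000
t = np.arange(steps) * dt
x = np.sin(2 * np.pi * 2 * t)  # continuously varying input to track
r = np.zeros(N)                # leaky filtered spike trains
x_hat = np.zeros(steps)

for k in range(steps):
    r *= 1 - lam * dt          # leak of the readout between spikes
    err = x[k] - Gamma @ r     # current coding error
    V = Gamma * err            # potentials: projection of the error on each weight
    i = np.argmax(V - T)       # most eligible neuron (at most one spike per step)
    if V[i] > T[i]:            # spike only if it shrinks the squared error
        r[i] += 1
    x_hat[k] = Gamma @ r

rmse = np.sqrt(np.mean((x - x_hat)**2))
```

With these (assumed) parameters the readout tracks the sinusoid closely, since each spike is fired exactly when it moves the estimate toward the input; the error stays bounded by the decoding-weight magnitudes.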


Introduction

It is widely accepted that neurons do not form a homogeneous population; there is large variability between neurons. Even without taking various forms of (short-term) plasticity into account, there is large heterogeneity in the shapes of the post-synaptic potentials (PSPs) that converge onto a single neuron (for an overview, see [12, 13]). This heterogeneity depends on, amongst other factors, the projection site (soma or dendrite, [14]), the number of receptors at the synapse, postsynaptic cell membrane properties [15,16,17], the type of neurotransmitter (GABAA, GABAB, glutamate), synapse properties (channel subunits, [18]), the local chloride reversal potential, and active properties of dendrites [12]. It results in variability of the decay times, amplitudes, and overall shapes of PSPs. Neural heterogeneity therefore plays an important role in both encoding and decoding stimuli: neural variability is not a problem that needs to be solved, but a feature that increases a network's coding versatility.
