Abstract

Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics that resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains, which can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, these correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations), and it is generally not well understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; and (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus differ among neurons. In all heterogeneous cases, neurons are lumped into classes, each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters, as indicated by comparison with simulations of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
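The self-consistency condition sketched above can be illustrated with a minimal toy loop in Python. This is not the authors' implementation: the neuron is a plain leaky integrate-and-fire unit, and the base current `mu`, the effective feedback `gain`, the trial count, and the damped (averaged) update are all illustrative assumptions.

```python
import numpy as np

def lif_spike_train(inp, dt, mu=1.5, tau=1.0, v_th=1.0, v_r=0.0):
    # Euler simulation of a leaky integrate-and-fire neuron driven by the
    # current trace `inp`; spikes are delta pulses approximated by 1/dt.
    v, out = v_r, np.zeros_like(inp)
    for i in range(len(inp)):
        v += dt * (mu - v + inp[i]) / tau
        if v >= v_th:
            out[i], v = 1.0 / dt, v_r
    return out

def power_spectrum(x, dt):
    # One-sided power spectrum S = dt/N |x_f|^2 of a mean-subtracted signal.
    xf = np.fft.rfft(x - x.mean())
    return dt / len(x) * np.abs(xf) ** 2

def noise_with_spectrum(S, n, dt, rng):
    # Surrogate noise whose power spectrum approximates S: fix the Fourier
    # amplitudes according to S and draw random phases.
    amp = np.sqrt(np.maximum(S, 0.0) * n / dt)
    z = amp * np.exp(2j * np.pi * rng.random(len(S)))
    z[0] = 0.0                       # enforce zero mean
    return np.fft.irfft(z, n=n)

# Iterative single-neuron scheme (toy parameters): the trial-averaged output
# spectrum, scaled by an assumed effective coupling `gain`, is fed back as
# the input spectrum; the damped update mimics the stabilizing averaging.
rng = np.random.default_rng(0)
n, dt, gain, trials = 2 ** 12, 0.05, 0.2, 20
S_in = np.ones(n // 2 + 1)           # start from white input noise
for _ in range(5):
    S_out = np.zeros_like(S_in)
    for _ in range(trials):          # average periodograms over trials
        xi = noise_with_spectrum(S_in, n, dt, rng)
        S_out += power_spectrum(lif_spike_train(xi, dt), dt)
    S_in = 0.5 * S_in + 0.5 * gain * S_out / trials
```

In a faithful version of the scheme, the input spectrum would be built from the recurrent connectivity (numbers and weights of excitatory and inhibitory inputs) rather than a single scalar gain, and convergence of the spectra would be monitored explicitly.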

Highlights

  • The autonomous dynamics of recurrent networks of spiking neurons is an important topic in computational neuroscience

  • Networks of randomly connected excitatory and inhibitory integrate-and-fire (IF) neurons are often used to study this problem, because this model is computationally efficient in numerical simulations and sometimes even permits analytical insight

  • The statistics inspected in this work are based on spike trains, which are defined as sums of delta functions, x(t) = Σ_i δ(t − t_i)
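
As a concrete illustration of this definition, the power spectrum of a spike train can be estimated by binning the spike times onto a fine grid (each delta function becomes a pulse of height 1/dt) and applying the FFT. The sketch below is a generic estimator assumed for illustration, not code from the paper; a Poisson spike train serves as a sanity check, since its power spectrum is known to be flat at the firing rate r for nonzero frequencies.

```python
import numpy as np

def spectrum_from_spike_times(times, T, dt):
    # Bin spike times into a delta-approximated spike train x(t) and
    # return the frequency grid and one-sided power spectrum.
    n = int(T / dt)
    idx = (np.asarray(times) / dt).astype(int)
    x = np.bincount(idx, minlength=n).astype(float) / dt
    xf = np.fft.rfft(x - x.mean())
    return np.fft.rfftfreq(n, dt), dt / n * np.abs(xf) ** 2

# Sanity check: a Poisson train of rate r has a flat spectrum S(f) ≈ r, f > 0
rng = np.random.default_rng(1)
r, T, dt = 5.0, 200.0, 1e-3
times = np.cumsum(rng.exponential(1.0 / r, size=int(2 * r * T)))
times = times[times < T]
f, S = spectrum_from_spike_times(times, T, dt)
```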

Introduction

The autonomous dynamics of recurrent networks of spiking neurons is an important topic in computational neuroscience. One state that lacks obvious collective effects but still can show a statistically rich behavior is the asynchronous state with low or absent cross-correlations among neurons. This state is found in many network models (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Renart et al., 2010; Helias et al., 2014) and in experimental recordings in different brain areas in the awake and attentive animal (Poulet and Petersen, 2008; Harris and Thiele, 2011).
