Abstract

Transfer entropy (TE) is an information-theoretic measure that has recently received attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, which has led most investigators to probe neural interactions at only a single time delay and a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE at a message length of one bin (D1TE) and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For the extended versions of TE, this improved dramatically to 73% of true connections. In addition, the connections correctly identified by the extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross-correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, run on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE could soon become practical tools for experimentalists who record hundreds of spiking neurons.
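
To illustrate the extension described above, the sketch below estimates TE from one binned spike train to another at a single candidate delay and with multi-bin message lengths, using direct histogram counting. It is a minimal sketch in Python, assuming 0/1 spike counts per time bin; the function name, the k/l message-length parameters, and the counting scheme are illustrative choices and do not reproduce the toolbox's actual implementation.

import numpy as np

def delayed_transfer_entropy(target, source, delay=1, k=1, l=1):
    """TE (in bits) from source to target at one candidate delay.

    target, source : 1-D arrays of 0/1 spike counts per time bin
    delay          : source-to-target lag, in bins
    k, l           : message lengths (past-history bins) for target and source
    """
    target = np.asarray(target, dtype=int)
    source = np.asarray(source, dtype=int)
    n = len(target)

    # Count joint states (target_next, target_past_word, source_past_word).
    # The source word of length l ends at bin t + 1 - delay.
    counts = {}
    start = max(k - 1, delay + l - 2)
    for t in range(start, n - 1):
        tgt_next = target[t + 1]
        tgt_past = tuple(target[t - k + 1 : t + 1])
        src_past = tuple(source[t - delay - l + 2 : t - delay + 2])
        key = (tgt_next, tgt_past, src_past)
        counts[key] = counts.get(key, 0) + 1

    total = sum(counts.values())

    # Marginal counts needed for the two conditional probabilities.
    c_past_joint = {}   # counts of (target_past, source_past)
    c_next_tpast = {}   # counts of (target_next, target_past)
    c_tpast = {}        # counts of target_past alone
    for (tn, tp, sp), c in counts.items():
        c_past_joint[(tp, sp)] = c_past_joint.get((tp, sp), 0) + c
        c_next_tpast[(tn, tp)] = c_next_tpast.get((tn, tp), 0) + c
        c_tpast[tp] = c_tpast.get(tp, 0) + c

    # TE = sum_p p(joint) * log2[ p(next | t_past, s_past) / p(next | t_past) ]
    te = 0.0
    for (tn, tp, sp), c in counts.items():
        p_joint = c / total
        p_full = c / c_past_joint[(tp, sp)]            # p(tn | tp, sp)
        p_self = c_next_tpast[(tn, tp)] / c_tpast[tp]  # p(tn | tp)
        te += p_joint * np.log2(p_full / p_self)
    return te

In practice one would evaluate this over a range of candidate delays (for example, 1-30 bins at 1 ms resolution, matching the synaptic delays in the model) and take the peak value as the strength of the putative connection.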

Highlights

  • To understand the functioning of a complex system, it is often useful to develop a map of interactions between the system’s components

  • In this work, we have extended single-bin, single-delay transfer entropy to accommodate a range of delays and message lengths

  • We found that these extensions doubled the rate at which effective connections were correctly identified in a spiking cortical network model (an illustrative evaluation sketch follows this list)
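
The identification rates quoted above come from thresholding the pairwise connectivity scores at a fixed false positive rate against the model's known wiring. A minimal sketch of that evaluation, assuming a score matrix (e.g., peak TE over delays) and the true adjacency matrix are already in hand, might look like the following; the function and variable names are illustrative and are not taken from the paper's code.

import numpy as np

def tpr_at_fixed_fpr(scores, true_adj, target_fpr=0.01):
    """Fraction of true connections detected when the detection threshold is
    set to yield roughly target_fpr false positives among non-connections."""
    n = scores.shape[0]
    off_diag = ~np.eye(n, dtype=bool)         # ignore self-connections
    is_connected = true_adj[off_diag] > 0      # ground-truth connections
    s = scores[off_diag]

    # Pick the threshold so that target_fpr of the non-connections exceed it.
    threshold = np.quantile(s[~is_connected], 1.0 - target_fpr)
    detected = s > threshold
    return detected[is_connected].mean()       # true positive rate

Sweeping target_fpr over a range of values would yield an ROC-style comparison of the kind used to contrast D1TE, the extended TE measures, and cross-correlation.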

Introduction

To understand the functioning of a complex system, it is often useful to develop a map of interactions between the system's components. This "network science" approach has been applied to a wide variety of systems with great success [1,2,3,4,5]. Interactions between components are classified as physical, functional, or effective [6,7]. Physical connections delimit the ways in which activity could flow within a circuit, whereas functional and effective connections describe the ways in which activity typically does flow. Knowledge of effective connectivity may provide insights into how information is typically distributed and recombined in neural circuits. Given that it is now possible to record activity from hundreds of closely spaced neurons at high temporal resolution for several hours at a time [11,12,13,14], an accurate and robust measure of effective connectivity between neurons is extremely important.
