Abstract

An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain’s sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide “integrated information”, which can be estimated from time-series data. But a computational hurdle has stymied attempts to measure large-scale information integration in real brains. Specifically, the measurement of integrated information involves a combinatorial search for the informational “weakest link” of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied to the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational “weakest link” of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with a high global efficiency, and that modular network structures promote the segregation of information.

Highlights

  • Information theory, which largely measures communication between transmitter-receiver pairs [1], has been key to understanding information transmission in the feedforward paths of the brain’s sensory periphery [2,3,4,5,6,7,8]

  • Traditional information-theoretic measures only quantify communication between pairs of transmitters and receivers, and have been of limited utility in decoding signals in the recurrent networks that dominate the rest of the brain

  • We show that a network partitioning method called “spectral clustering” [23, 24], when applied to correlation matrices of neural time-series data (Fig 1), reliably identifies or approximates the “minimum information bipartition” (MIB) of even large systems (see the sketch below)
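
To make the highlighted approach concrete, here is a minimal sketch, assuming multivariate time-series data arranged as a time × units matrix and using scikit-learn’s SpectralClustering as a stand-in for the paper’s own implementation; the data matrix X and all variable names are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical data: 1000 time points recorded from 8 units (e.g., electrodes).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))

# Affinity matrix: absolute pairwise correlations between units.
A = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(A, 0.0)  # remove self-similarity

# Spectral clustering with k=2 yields a candidate bipartition, used here
# as an approximation to the minimum information bipartition (MIB).
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(A)

part_a = np.flatnonzero(labels == 0)
part_b = np.flatnonzero(labels == 1)
```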

Introduction

Information theory, which largely measures communication between transmitter-receiver pairs (e.g., a telephone’s transmitter and receiver) [1], has been key to understanding information transmission in the feedforward paths of the brain’s sensory periphery [2,3,4,5,6,7,8]. Traditional information-theoretic measures are of limited utility, however, as soon as signals enter the recurrent networks that form the rest of the brain, because these measures are designed to quantify feedforward information flow. Until recently, no theoretically sound measure was available to quantify and analyze the information integrated by entire recurrent networks. Recent work in information theory has risen to meet this challenge of quantifying the integration of information across the recurrent networks that bridge spatially distributed brain areas. The intuition can be phrased like this: if you cut a network into disconnected parts, forcing those parts to evolve over time independently of one another, how much less information is carried over time in the network? If we can estimate this difference accurately, we would have a value, in bits, of how much information is integrated in a network.
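
Under a Gaussian assumption, this cut-and-compare intuition has a simple concrete form: the time-lagged mutual information of the intact system, minus that of its two parts when they evolve independently. The sketch below implements this whole-minus-sum proxy (in nats, via covariance log-determinants); it is one illustrative estimator under our stated assumptions, not necessarily the exact measure used in the paper, and the function names and lag tau are ours:

```python
import numpy as np

def lagged_mi(X, tau=1):
    """Gaussian time-lagged mutual information I(X_t; X_{t+tau}) in nats,
    computed from log-determinants of covariance matrices."""
    past, future = X[:-tau], X[tau:]
    joint = np.hstack([past, future])
    def logdet_cov(Z):
        C = np.atleast_2d(np.cov(Z, rowvar=False))
        return np.linalg.slogdet(C)[1]
    return 0.5 * (logdet_cov(past) + logdet_cov(future) - logdet_cov(joint))

def integrated_info(X, part_a, part_b, tau=1):
    """Whole-minus-sum proxy: information the whole system carries across
    time, minus what its two parts carry when cut apart."""
    return (lagged_mi(X, tau)
            - lagged_mi(X[:, part_a], tau)
            - lagged_mi(X[:, part_b], tau))
```

Applied to the two parts returned by the spectral-clustering sketch above, integrated_info(X, part_a, part_b) scores a candidate bipartition; the spectral candidate approximates the bipartition across which this informational loss is smallest.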
