Abstract

In the last two decades, there has been an explosion of interest in modeling the brain as a network, where nodes correspond variously to brain regions or neurons, and edges correspond to structural or statistical dependencies between them. This kind of network construction, which preserves spatial (or structural) information while collapsing across time, has become broadly known as “network neuroscience.” In this work, we describe an alternative application of network science to neural data, the network-based analysis of non-linear time series, and review applications of these methods to neural recordings. Instead of preserving spatial information and collapsing across time, network analysis of time series does the reverse: it collapses spatial information while preserving temporally extended dynamics, typically corresponding to evolution through some kind of phase- or state-space. This allows researchers to infer a possibly low-dimensional “intrinsic manifold” from empirical brain data. We discuss three methods of constructing networks from nonlinear time series, and how to interpret them in the context of neural data: recurrence networks, visibility networks, and ordinal partition networks. By capturing typically continuous, non-linear dynamics in the form of discrete networks, we show how techniques from network science, non-linear dynamics, and information theory can extract meaningful information distinct from what is normally accessible via standard network neuroscience approaches.
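To make the first construction concrete, here is a minimal sketch (not code from the paper) of a recurrence network: a scalar series is time-delay embedded, and any two embedded states closer than a threshold eps are joined by an edge. The embedding dimension, delay, threshold, and toy signal below are illustrative assumptions, and the brute-force pairwise distance computation favors clarity over efficiency.

    import numpy as np

    def embed(x, dim=3, tau=5):
        # Time-delay embedding: rows are dim-dimensional reconstructed states.
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def recurrence_network(x, dim=3, tau=5, eps=0.5):
        # Nodes are embedded states; an edge marks two states recurring within eps.
        pts = embed(x, dim, tau)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        A = (dists < eps).astype(int)
        np.fill_diagonal(A, 0)  # drop self-loops
        return A

    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 1200)
    x = np.sin(t) + 0.1 * rng.standard_normal(len(t))  # noisy periodic toy signal
    A = recurrence_network(x)
    print(A.shape[0], "nodes,", A.sum() // 2, "edges")

The resulting adjacency matrix can then be handed to standard graph-theoretic measures (degree, clustering, community structure), which is what licenses transferring network-science tools to single-channel dynamics.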

Highlights

  • Over the course of the last decade, “network neuroscience” has emerged as a rapidly expanding research paradigm in computational and cognitive neuroscience (Sporns, 2010; Fornito et al., 2016)

  • We explore three ways of constructing a network from a time series: (1) recurrence networks, which encode the tendency of the system to return to, or dwell in, particular subspaces as it evolves over a continuous manifold (sketched after the Abstract); (2) visibility networks; and (3) ordinal partition networks

  • A significant benefit of this method is that it allows for simultaneous analysis of individual time series and of higher-order connectivity patterns within the same general framework: for example, a visibility network (VN) could be used to explore the relationship between the individual Hurst exponents of a pair of time series and their associated connectivity (a construction sketch follows this list)
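As referenced in the last highlight, the following is a minimal sketch of natural visibility-graph construction in the style of Lacasa et al. (2008), assuming unit-spaced samples; the brute-force visibility check and the toy random-walk input are illustrative assumptions, not an efficient or canonical implementation.

    import numpy as np

    def visibility_graph(x):
        # Edge (a, b) iff every intermediate sample lies strictly below the
        # straight line connecting (a, x[a]) and (b, x[b]).
        edges = []
        n = len(x)
        for a in range(n):
            for b in range(a + 1, n):
                if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                       for c in range(a + 1, b)):
                    edges.append((a, b))
        return edges

    rng = np.random.default_rng(0)
    x = rng.standard_normal(200).cumsum()  # toy random-walk series
    edges = visibility_graph(x)
    deg = np.bincount(np.array(edges).ravel(), minlength=len(x))
    print(len(edges), "edges; mean degree %.2f" % deg.mean())

For self-affine signals such as fractional Brownian motion, the degree distribution of the visibility graph has been related to the Hurst exponent of the underlying series, which is the kind of per-channel measure the highlight above alludes to.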


INTRODUCTION

Over the course of the last decade, “network neuroscience” has emerged as a rapidly expanding research paradigm in computational and cognitive neuroscience (Sporns, 2010; Fornito et al., 2016). A parallel body of work instead constructs networks from the dynamics of individual time series. Referred to as “network analysis of time series” (Lacasa et al., 2008, 2015; Donner et al., 2010; Small, 2013; McCullough et al., 2015; Zou et al., 2019), these methods aim to provide a best-of-both-worlds approach, allowing researchers to leverage the considerable power of graph theory and network science to understand neural manifolds without reducing the number of states to the same extent that more well-known standard algorithms do. We suggest that these approaches constitute a complementary branch of network neuroscience based on analyzing manifold networks rather than functional or structural connectivity networks.
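The third construction, the ordinal partition network (OPN), admits an equally short sketch: each embedded window of the series is mapped to its ordinal pattern (the permutation returned by argsort), patterns become nodes, and transitions between successive patterns become directed, weighted edges. The parameters and toy signal below are illustrative assumptions rather than the settings used in the works cited above.

    import numpy as np
    from collections import Counter

    def ordinal_partition_network(x, dim=3, tau=1):
        # Map each window to its ordinal pattern; count directed transitions
        # between the patterns of successive windows.
        n = len(x) - (dim - 1) * tau
        patterns = [tuple(np.argsort(x[i : i + dim * tau : tau])) for i in range(n)]
        return Counter(zip(patterns[:-1], patterns[1:]))

    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 2000)
    x = np.sin(t) + 0.05 * rng.standard_normal(len(t))
    opn = ordinal_partition_network(x)
    print(len(opn), "distinct pattern transitions")
    for (src, dst), w in opn.most_common(3):
        print(src, "->", dst, ": weight", w)

Because the node set is drawn from the finite set of dim! permutations, the OPN coarse-grains a continuous trajectory into a small, discrete transition graph, which makes information-theoretic measures over the edge weights straightforward to compute.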

RECURRENCE NETWORKS
Constructing a Recurrence Network
Analyzing a Recurrence Network
Applications of Recurrence Networks in Neuroscience
VISIBILITY NETWORKS
Constructing a Visibility Network
Analyzing a Visibility Network
Applications in Neuroscience
ORDINAL PARTITION NETWORKS
Constructing an OPN
Analyzing an OPN
SOFTWARE IMPLEMENTATIONS
CONCLUSIONS