Abstract

Variability in source dynamics across the sources in an activated network may be indicative of how information is processed within the network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data recorded during a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, which quantifies the rate at which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support previous attempts to characterize the functional organization of the activated brain based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay.
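The transfer-entropy measure described above can be illustrated on synthetic coupled data of the kind the study uses for validation. The sketch below is a minimal binned (plug-in) estimator with one-step histories, chosen for transparency; it is not the estimator used in the study, and the coupled-noise example (coefficients 0.7 and 0.3, delay of one sample) is an illustrative assumption rather than the paper's model.

```python
import numpy as np

def transfer_entropy(source, target, bins=4):
    """Binned plug-in estimate of transfer entropy TE(source -> target), in bits:
    TE = sum p(t1, t0, s0) * log2[ p(t1 | t0, s0) / p(t1 | t0) ],
    with one-step histories t0 = target[t], s0 = source[t], t1 = target[t+1]."""
    # discretize each series into equiprobable bins via inner quantile edges
    s = np.digitize(source, np.quantile(source, np.linspace(0, 1, bins + 1)[1:-1]))
    t = np.digitize(target, np.quantile(target, np.linspace(0, 1, bins + 1)[1:-1]))
    t1, t0, s0 = t[1:], t[:-1], s[:-1]
    # joint histogram over (target future, target past, source past)
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(t1, t0, s0):
        joint[a, b, c] += 1
    p_abc = joint / joint.sum()
    p_bc = p_abc.sum(axis=0)        # p(t0, s0)
    p_ab = p_abc.sum(axis=2)        # p(t1, t0)
    p_b = p_abc.sum(axis=(0, 2))    # p(t0)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                if p_abc[a, b, c] > 0 and p_ab[a, b] > 0:
                    te += p_abc[a, b, c] * np.log2(
                        p_abc[a, b, c] * p_b[b] / (p_bc[b, c] * p_ab[a, b]))
    return te

# Unidirectionally coupled noise with a one-sample delay in coupling:
# the estimator should report more information flowing x -> y than y -> x.
rng = np.random.default_rng(1)
x = rng.standard_normal(3000)
y = np.zeros(3000)
for i in range(1, 3000):
    y[i] = 0.7 * x[i - 1] + 0.3 * rng.standard_normal()
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

Because the coupling is x-driven, the asymmetry te_xy > te_yx recovers the direction of predictive information transfer, which is the quantity the study relates to differences in signal variability.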

Highlights

  • Significant progress has been made showing that cognitive operations result from the generation and transformation of cooperative modes of neural activity (Bressler, 1995, 2002; McIntosh, 1999)

  • We explored how the differences in variability of the source dynamics, estimated at fine and coarse time scales, can be explained, in a statistical sense, by an asymmetry in the amount of information transferred from one source to another

  • An analysis of synthetic data complements the empirical findings on the interplay between sample entropy and transfer entropy in the pairwise relations between the neuromagnetic sources


Summary

METHODS

The value of the time embedding delay τ is kept equal to 1, measured in data points of the time series for which sample entropy is to be estimated. Sample entropy can be estimated as the negative average natural logarithm of the conditional probability that two delay vectors (points in a multi-dimensional state space) that are close in the d-dimensional space (meaning that the distance between them is less than the scale length r) will remain close in the (d + 1)-dimensional space. Multi-scale entropy (MSE) was proposed to estimate the sample entropy of finite time series at different time scales (Costa et al., 2002). This is performed by averaging data points from the original time series within non-overlapping windows whose length is set by the scale factor. To obtain the MSE curve, sample entropy is computed for each coarse-grained time series.

