Abstract

Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
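
To illustrate the core idea of the ensemble method described above, here is a minimal NumPy sketch (not the authors' implementation; the function name and parameters are ours, chosen for illustration) in which the observations used for density estimation at a given time point are drawn from all trials of an ensemble, rather than pooled over time within a single realization:

```python
import numpy as np

def pool_observations_across_trials(data, t, dim, tau):
    """Collect delay-embedded observations at a single time point t
    from every trial of an ensemble, instead of pooling over time.

    data : array of shape (n_trials, n_samples), the ensemble of realizations
    t    : time index at which the (local) PDF is to be estimated
    dim  : embedding dimension
    tau  : embedding delay in samples
    """
    # One embedding vector per trial, all taken at the same time index t,
    # so stationarity over time is not required.
    idx = t - tau * np.arange(dim)   # indices t, t - tau, ..., t - (dim-1)*tau
    return data[:, idx]              # shape: (n_trials, dim)

# Toy usage: 50 trials of a non-stationary (random-walk) process, embedded at t = 300.
rng = np.random.default_rng(0)
trials = np.cumsum(rng.standard_normal((50, 1000)), axis=1)
states_at_t = pool_observations_across_trials(trials, t=300, dim=3, tau=5)
print(states_at_t.shape)  # (50, 3)
```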

Highlights

  • We typically think of the brain as some kind of information processing system, albeit mostly without having a strict definition of information processing in mind

  • Time-resolved Graphics Processing Unit (GPU)-based transfer entropy (TE) analysis revealed significant information transfer at the group level (p < 0.001, corrected for multiple comparisons; binomial test under the null hypothesis that the number of occurrences k of a link is B(k | p0, n)-distributed, with p0 = 0.05 and n = 15) that changed over time (Figure 9, panel D, and Table 2 for reconstructed information transfer delays; a worked sketch of this group-level test follows the list)

  • Conclusion and further directions: We presented an implementation of the ensemble method for TE proposed in [55] that uses a GPU to handle the most computationally demanding aspects of the analysis
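
As a worked illustration of the group-level test mentioned in the second highlight, the sketch below evaluates the binomial null B(k | p0, n) with p0 = 0.05 and n = 15 as stated there; the observed count k = 6 is an arbitrary example value, not a result from the paper:

```python
from scipy.stats import binom

# Under the null hypothesis, each of the n subjects shows a given link with
# probability p0 (the single-subject alpha level), so the number of subjects
# k showing the link is Binomial(n, p0).
p0, n = 0.05, 15

# p-value for observing at least k subjects with the link (k = 6 is illustrative).
k = 6
p_value = binom.sf(k - 1, n, p0)  # P(K >= k) under the null
print(f"P(K >= {k} | n={n}, p0={p0}) = {p_value:.2e}")
```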


Introduction

We typically think of the brain as some kind of information processing system, albeit mostly without having a strict definition of information processing in mind. In efforts dating back to Alan Turing [1] it was shown that any act of information processing can be broken down into the three components of information storage, information transfer, and information modification [1,2,3,4]. These components can be identified in theoretical or technical information processing systems, such as ordinary computers, based on the specialized machinery for and the spatial separation of these component functions. In these examples, a separation of the components of information processing via a specialized mathematical formalism seems almost superfluous.

TE and other information-theoretic functionals are calculated from the random variables' joint PDFs $p_{X_s Y_t}(X_s = a_i, Y_t = b_j)$ and conditional PDFs $p_{X_s \mid Y_t}(X_s = a_i \mid Y_t = b_j)$ (with $s, t \in \{1, \dots, N\}$).
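
For reference, a standard form of the transfer entropy functional assembled from such joint and conditional PDFs is given below (the common Schreiber formulation with embedded past states of dimensions $d_X$ and $d_Y$; the notation here is generic and not necessarily identical to the authors' delay-embedded variant):

```latex
% Transfer entropy from a source Y to a target X, written in terms of the
% joint and conditional PDFs of the next target sample x_{t+1} and the
% embedded past states x_t^{d_X}, y_t^{d_Y} (standard Schreiber form):
TE_{Y \rightarrow X} =
  \sum_{x_{t+1},\, \mathbf{x}_t^{d_X},\, \mathbf{y}_t^{d_Y}}
  p\!\left(x_{t+1}, \mathbf{x}_t^{d_X}, \mathbf{y}_t^{d_Y}\right)
  \log \frac{p\!\left(x_{t+1} \mid \mathbf{x}_t^{d_X}, \mathbf{y}_t^{d_Y}\right)}
            {p\!\left(x_{t+1} \mid \mathbf{x}_t^{d_X}\right)}
```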
