Abstract

Causal network reconstruction from time series is an emerging topic in many fields of science. Beyond inferring directionality between two time series, the goal of causal network reconstruction or causal discovery is to distinguish direct from indirect dependencies and common drivers among multiple time series. Here, the problem of inferring causal networks including time lags from multivariate time series is recapitulated from the underlying causal assumptions to practical estimation problems. Each aspect is illustrated with simple examples including unobserved variables, sampling issues, determinism, stationarity, nonlinearity, measurement error, and significance testing. The effects of dynamical noise, autocorrelation, and high dimensionality are highlighted in comparison studies of common causal reconstruction methods. Finally, method performance evaluation approaches and criteria are suggested. The article is intended to briefly review and accessibly illustrate the foundations and practical problems of time series-based causal discovery and stimulate further methodological developments.

Highlights

  • In Sec. VII A, we study the effect of dynamical noise on several common time series-based causal discovery approaches for chaotic systems

  • We turn to the topic of practical estimation, introducing several common causal discovery methods and discussing their consistency, significance testing, and computational complexity

  • Alternative evaluation metrics that do not depend on a particular significance level, but directly on the p-values, are the Kullback-Leibler divergence, which evaluates whether the p-values are uniformly distributed, and the Area Under the Power Curve (AUPC), which evaluates true positives
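The two p-value-based metrics from the highlights can be sketched in a few lines. This is an illustrative implementation, not the paper's code: the function names and the binning/grid choices (`n_bins`, `n_grid`) are assumptions, and the KL divergence is computed against a uniform reference histogram.

```python
import numpy as np

def kl_divergence_to_uniform(pvals, n_bins=10):
    """KL divergence between the empirical p-value histogram and the
    uniform distribution (near zero iff the p-values look uniform)."""
    hist, _ = np.histogram(pvals, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    q = 1.0 / n_bins  # uniform reference probability per bin
    mask = p > 0      # 0 * log 0 = 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q)))

def aupc(pvals_true_links, n_grid=101):
    """Area under the power curve: the fraction of true links detected
    at level alpha, integrated over alpha in [0, 1] (trapezoid rule)."""
    pv = np.asarray(pvals_true_links, dtype=float)
    alphas = np.linspace(0.0, 1.0, n_grid)
    power = np.array([(pv <= a).mean() for a in alphas])
    return float(np.sum(0.5 * (power[:-1] + power[1:]) * np.diff(alphas)))
```

Under the null (well-calibrated tests on absent links), p-values should be uniform, giving a KL divergence near zero and an AUPC near 0.5; for true links, smaller p-values push the AUPC toward 1.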


INTRODUCTION

Reconstructing the causal relations behind the phenomena we observe is a fundamental problem in all fields of science. Novel computing hardware today allows efficient processing of massive amounts of data. These developments have led to emerging interest in the problem of reconstructing causal networks, or causal discovery, from observational time series. All we can measure from observational data are statistical dependencies. These can be visualized in a graphical model (Lauritzen, 1996) or time series graph (Eichler, 2011) that represents the conditional independence relations among the variables and their time lags (Fig. 1). We focus on time-lagged causal discovery in the framework of conditional independence testing using the assumptions of time order, Causal Sufficiency, the Causal Markov Condition, and Faithfulness, among others, all of which are discussed thoroughly in this paper. The paper is accompanied by a Python Jupyter notebook at https://github.com/jakobrunge/tigramite to reproduce some of the examples.
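The core operation of conditional independence testing for lagged links can be illustrated with a minimal sketch. This is not the paper's tigramite implementation; it assumes linear dependencies and uses partial correlation (regressing the conditioning variable out of both series and correlating the residuals) on a hypothetical chain X → Z → Y.

```python
import numpy as np
from scipy import stats

def parcorr_test(x, y, z=None):
    """Test X independent of Y given Z via partial correlation:
    regress Z out of both variables (OLS) and correlate the residuals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if z is not None:
        Z = np.column_stack([np.asarray(z, dtype=float), np.ones(len(x))])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r, p = stats.pearsonr(x, y)
    return r, p

# Toy chain X_{t-1} -> Z_t and Z_{t-1} -> Y_t: X and Y are dependent
# at lag 2, but conditioning on the mediator Z_{t-1} removes the link,
# distinguishing the direct from the indirect dependency.
rng = np.random.default_rng(1)
T = 2000
x = rng.normal(size=T)
z = np.concatenate([[0.0], x[:-1]]) + 0.5 * rng.normal(size=T)
y = np.concatenate([[0.0], z[:-1]]) + 0.5 * rng.normal(size=T)
x_lag2, z_lag1, y_now = x[:-2], z[1:-1], y[2:]

r_marg, p_marg = parcorr_test(x_lag2, y_now)          # strongly dependent
r_cond, p_cond = parcorr_test(x_lag2, y_now, z_lag1)  # approx. independent
```

The marginal test flags a spurious lag-2 link from X to Y; conditioning on the mediator drives the partial correlation toward zero, which is exactly the distinction between direct and indirect dependencies that causal network reconstruction targets.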

FROM GRANGER CAUSALITY TO CONDITIONAL INDEPENDENCE
Definition of time series graphs
Separation
ASSUMPTIONS OF CAUSAL DISCOVERY FROM OBSERVATIONAL TIME SERIES
Causal sufficiency
Causal Markov condition
Faithfulness
Instantaneous effects
Stationarity
Dependency type assumptions
Measurement error
PRACTICAL ESTIMATION
Causal discovery algorithms
Optimal causation entropy
PC algorithm
Consistency
Significance testing
Computational complexity
PERFORMANCE EVALUATION CRITERIA
Models
Model diversity
Metrics
Dynamical noise in deterministic chaotic systems
Autocorrelation
Curse of dimensionality
DISCUSSION AND CONCLUSIONS
Dynamical noise model
Model for examples on autocorrelation and high-dimensionality
Findings
Pre-whitening and block-shuffling