Abstract

Detection of a causal relationship between two or more sets of data is an important problem across various scientific disciplines. The Granger causality index and its derivatives are important metrics developed and used for this purpose. However, test statistics based on these metrics ignore the effect of practical measurement impairments such as subsampling, additive noise, and finite sample effects. In this paper, we model the problem of detecting a causal relationship between two time series as a binary hypothesis test, with the null and alternate hypotheses corresponding to the absence and presence of a causal relationship, respectively. We derive the distribution of the test statistic under the two hypotheses and show that measurement impairments can lead to suppression of a causal relationship between the signals, as well as false detection of a causal relationship where none exists. We also use the derived results to propose two alternative test statistics for causality detection. These detectors are analytically tractable, which allows us to design the detection threshold and determine the number of samples required to achieve given missed-detection and false-alarm rates. Finally, we validate the derived results using extensive Monte Carlo simulations as well as experiments based on real-world data, and illustrate the dependence of the detection performance of the conventional and proposed causality detectors on parameters such as the additive noise variance and the strength of the causal relationship.
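To make the setting concrete, the conventional Granger causality index discussed above compares the residual variance of an autoregressive model of one series against the residual variance after adding lagged samples of the other series. The sketch below is a minimal illustration of that conventional index, not the paper's proposed detectors; the AR order, coefficients, and thresholds are illustrative assumptions.

```python
import numpy as np

def granger_index(x, y, p=2):
    """Conventional Granger causality index from x to y with AR order p:
    ln(restricted residual variance / unrestricted residual variance)."""
    n = len(y)
    Y = y[p:]
    # Lagged regressors: columns are y[t-1..t-p] and x[t-1..t-p]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    # Restricted model: y predicted from its own past only
    A = np.column_stack([np.ones(n - p), lags_y])
    resid_r = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    # Unrestricted model: also include the past of x
    B = np.column_stack([A, lags_x])
    resid_u = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]
    return np.log(np.var(resid_r) / np.var(resid_u))

# Simulated example: x drives y with coefficient 0.8 (the "strength"
# of the causal link); no causal link runs from y back to x.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

gc_xy = granger_index(x, y)  # clearly positive: x Granger-causes y
gc_yx = granger_index(y, x)  # near zero, but nonzero in finite samples
```

Note that `gc_yx` is not exactly zero even though no reverse causal link exists: with finite samples the extra regressors always reduce the residual variance slightly, which is one source of the false alarms the paper analyzes.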
