Abstract

Existing causal discovery algorithms are often evaluated against two success criteria, one of which is typically unachievable and the other too weak for practical purposes. The unachievable criterion, uniform consistency, requires that a discovery algorithm identify the correct causal structure, with high probability, at a known sample size. The weak but achievable criterion, pointwise consistency, requires only that the algorithm identify the correct causal structure in the limit. We investigate two intermediate success criteria, decidability and progressive solvability, that are stricter than pointwise consistency but weaker than uniform consistency. To do so, we review several topological theorems characterizing which discovery problems are decidable and/or progressively solvable. These theorems apply to any problem of statistical model selection, but here we apply them only to the selection of causal models. We show, under several common modeling assumptions, that there is no uniformly consistent procedure for identifying the direction of a causal edge, but that there are statistical decision procedures and progressive solutions. We focus on linear models in which the error terms are either non-Gaussian or contain no Gaussian components; the latter modeling assumption is novel to this paper. We pay particular attention to which success criteria remain feasible when confounders are present.
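
To make the role of non-Gaussian errors concrete, here is a minimal, illustrative sketch rather than a procedure from the paper: it simulates a linear model x → y with skewed, non-Gaussian noise and compares a simple residual-dependence score in both regression directions. The particular score, the covariance between the regressor and the squared residual, is an assumed illustrative choice; it is zero in population when the residual is genuinely independent of the regressor, which holds only in the causal direction in this setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Ground truth: x -> y, linear, with centered exponential (non-Gaussian) noise.
x = rng.exponential(1.0, n) - 1.0
y = 2.0 * x + (rng.exponential(1.0, n) - 1.0)

def dependence_score(regressor, target):
    """Fit target on regressor by OLS, then measure one simple signature of
    dependence between the regressor and the residual:
    |cov(regressor, residual**2)|.
    In population this is zero when the residual is independent of the
    regressor (the causal direction), and typically nonzero in the
    anti-causal direction when the noise is skewed, as here."""
    slope, intercept = np.polyfit(regressor, target, 1)
    resid = target - (slope * regressor + intercept)
    return abs(np.cov(regressor, resid**2)[0, 1])

forward = dependence_score(x, y)   # regress y on x (causal direction)
backward = dependence_score(y, x)  # regress x on y (anti-causal direction)
print(f"x->y score: {forward:.4f}, y->x score: {backward:.4f}")
# The x->y score should be close to zero and the y->x score clearly larger,
# pointing to x -> y as the causal direction.
```

With Gaussian errors both regression directions yield residuals independent of the regressor, so no such asymmetry exists; this is the intuition behind restricting attention to non-Gaussian error terms.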
