Abstract
The problem of individualization is crucial in almost every field of science. Identifying the causes of specific observed events is likewise essential for accurate decision-making as well as for explanation. However, such tasks invoke counterfactual relationships and therefore cannot be determined from population data alone. For example, the probability of benefiting from a treatment concerns an individual who would have a favorable outcome if treated and an unfavorable outcome if untreated; it cannot be estimated from experimental data, even when conditioned on fine-grained features, because we cannot observe both outcomes for the same individual. Tian and Pearl provided bounds on this and other probabilities of causation using a combination of experimental and observational data. Those bounds, though tight, can be narrowed significantly when structural information is available in the form of a causal model. This added information may provide the power to solve central problems, such as explainable AI, legal responsibility, and personalized medicine, all of which demand counterfactual logic. This paper derives, analyzes, and characterizes these new bounds, and illustrates some of their practical applications.
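For context, the Tian–Pearl bounds referenced above constrain the probability of necessity and sufficiency (PNS), which formalizes the probability of benefiting from treatment. In the notation assumed here for illustration, $y_x$ denotes the counterfactual event that outcome $Y=y$ would occur under treatment $X=x$, and $x'$, $y'$ denote the complementary treatment and outcome; the bounds combine the experimental quantities $P(y_x)$, $P(y_{x'})$ with the observational joint distribution $P(x,y)$:

\[
\max\bigl\{\, 0,\; P(y_x) - P(y_{x'}),\; P(y) - P(y_{x'}),\; P(y_x) - P(y) \,\bigr\}
\;\le\; \mathrm{PNS} \;\le\;
\min\bigl\{\, P(y_x),\; P(y'_{x'}),\; P(x,y) + P(x',y'),\; P(y_x) - P(y_{x'}) + P(x,y') + P(x',y) \,\bigr\}.
\]

The paper's contribution is to narrow such bounds further when additional structural information, in the form of a causal model, is available.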