Counterfactual Undoing in Deterministic Causal Reasoning

Steven A. Sloman (Steven_Sloman@brown.edu)
Department of Cognitive & Linguistic Sciences, Box 1978
Brown University, Providence, RI 02912 USA

David A. Lagnado (David_Lagnado@Brown.Edu)
Department of Cognitive and Linguistic Sciences, Box 1978
Brown University, Providence, RI 02912 USA

Abstract

Pearl (2000) offers a formal framework for modeling causal and counterfactual reasoning. By virtue of the way it represents intervention on a causal system, the framework makes predictions about how people reason when asked counterfactual questions about causal relations. Four studies are reported that test the application of the framework to deterministic causal and conditional arguments. The results support the proposed representation of causal arguments, especially when the nature of the counterfactual intervention is made explicit. The results also show that conditional relations are construed in different ways.

Introduction

Many questions are decided by causal analysis. In the law, issues of negligence concern who caused an outcome and, at least under common law, the determination of guilt requires evidence of a causal chain leading to a crime. Evidence that might increase the probability of guilt (e.g., an accused's race) is impermissible if it does not support a causal analysis of the crime. Some legal scholars (Lipton, 1992) claim that legal analyses of causality are in no sense special, that causation in the law derives from everyday thinking about causality. Causal analysis is just as prevalent in science, engineering, and politics, indeed in every domain that involves human prediction and control.

Causal analysis is often difficult because it depends not only on what happened, but also on what might have happened (Mackie, 1974). Thus the claim that A caused B will often imply that if A had not occurred, then B would not have occurred. Likewise, the fact that B would not have occurred had A not occurred often suggests that A caused B. This explains a fundamental law of experimental science: mere observation can reveal only a correlation, not a causal relation. That is why causal induction requires manipulation: control over an independent variable such that changes in its value determine the value of the dependent variable while other relevant conditions are held constant. Everyday causal induction has these same requirements. Causal inductions in everyday contexts are aided by manipulation of potential causes, by people intervening on the world rather than just observing it (the conditions favoring intervention are spelled out in Pearl, 2000; Spirtes, Glymour, & Scheines, 1993).

If we already have some causal knowledge, then certain causal questions can be answered without actual intervention. Some of those questions can be answered through mental intervention: imagining a counterfactual situation in which a variable is manipulated and determining the effects of the change. People attempt this, for example, whenever they wonder "if only..." (if only I hadn't made that stupid comment; if only my data were different).

Pearl (2000) offers a causal modeling framework that covers such counterfactual reasoning. The framework makes predictions about how people reason when asked counterfactual questions about causal relations. Pearl's analysis extends to relations of probabilistic causality, but this paper is limited to studies of deterministic arguments. Before describing those studies, we briefly review the relevant aspects of Pearl's analysis.
Observation vs. Causation (Seeing vs. Doing)

Seeing

In general, observation can be represented using the tools of conventional probability. The probability of observing an event (say, that a logic gate is working properly) under some circumstance (e.g., the temperature is low) can be represented as the conditional probability that a random variable G, representing the logic gate, is at some level of operation g when temperature T is observed to take some value t:

    Pr{G = g | T = t} = Pr{G = g & T = t} / Pr{T = t}

Conditional probabilities are symmetric in the sense that, if well-defined, their converses are well-defined too. In fact, given the marginal probabilities of the relevant variables, Bayes' rule tells us how to evaluate the converse:

    Pr{T = t | G = g} = Pr{G = g | T = t} Pr{T = t} / Pr{G = g}

Doing

To represent action, Pearl proposes an operator do(•) that controls both the value of the manipulated variable and the graph that represents causal dependencies.
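To make the contrast concrete, here is a minimal sketch in Python (not from the paper) of seeing versus doing on the gate example. The prior over T, the mechanism gate(), and all names are illustrative assumptions; the point is only that conditioning on an observed effect changes beliefs about its cause, while do(•) does not.

    # A minimal sketch (illustrative assumptions, not from the paper):
    # a two-variable deterministic model in which temperature T causes
    # the state of a logic gate G (the gate works iff temperature is low).

    P_T = {"low": 0.7, "high": 0.3}   # hypothetical prior over the cause T

    def gate(t):
        # Deterministic causal mechanism: G is a function of T.
        return "working" if t == "low" else "faulty"

    # Joint distribution induced by the model: Pr{T = t & G = g}.
    P_joint = {(t, gate(t)): p for t, p in P_T.items()}

    # SEEING: observe G = "working" and condition.
    # Pr{T = t | G = g} = Pr{G = g & T = t} / Pr{G = g}
    P_g = sum(p for (t, g), p in P_joint.items() if g == "working")
    P_T_seen = {t: P_joint.get((t, "working"), 0.0) / P_g for t in P_T}
    print(P_T_seen)   # {'low': 1.0, 'high': 0.0}: observing G is diagnostic of T

    # DOING: do(G = "working") overrides the mechanism for G and severs
    # the T -> G link, so beliefs about T stay at the prior.
    P_T_do = dict(P_T)
    print(P_T_do)     # {'low': 0.7, 'high': 0.3}: the intervention says nothing about T

The asymmetry is Pearl's point: under observation, Bayes' rule licenses inference from effect back to cause, but an intervention replaces the mechanism that would have generated the effect, so no such backward inference is licensed.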
