Inferring Unobserved Category Features With Causal Knowledge

Bob Rehder (bob.rehder@nyu.edu)
Department of Psychology, New York University, 6 Washington Place, New York, NY 10003 USA

Russell C. Burnett (r-burnett@northwestern.edu)
Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208 USA

Abstract

One central function of categories is to allow people to infer the presence of features that cannot be directly observed. Although the effect of observing past category members on such inferences has been considered, the effect of theoretical or causal knowledge about the category has not. We compared the effects of causal laws on feature prediction with the effects of the inter-feature correlations that are produced by those laws, and with the effect of exemplar typicality or similarity. Feature predictions were strongly influenced by causal knowledge. However, they were also influenced by similarity, in violation of normative behavior as defined by a Bayesian network view of causal reasoning. Finally, feature predictions were not influenced by the presence of correlations among features in observed category members, indicating that causal relations versus correlations lead to different inferences regarding the presence of unobserved features.

When an object has been classified as an instance of a concept, knowledge associated with that concept can be brought to bear in reasoning about the features that the object is likely to possess. But what is the nature of that knowledge, and how is it used to make inferences or predictions about unobserved features? Recent research has demonstrated that tasks such as category learning, categorization, and category-based induction are often influenced by the theoretical knowledge that one possesses.
This knowledge often takes the form of causal relations between features of a category, and theories have been proposed to account for the effects of such knowledge (Rehder, 1999, 2001; Waldmann, Holyoak, & Fratianne, 1995). In this article we assess the effect of causal relations on feature inferences, and in the first of the following sections we present a formal model of causal knowledge and its predictions regarding feature inferences.

Of course, another form of knowledge that may guide feature inference is empirical information derived from the first-hand observation of category members. Prior research suggests two likely effects of such empirical knowledge on feature prediction. First, feature predictions will often be influenced by the overall similarity to the category of the exemplar with the unobserved feature. In the second section we discuss this predicted effect of similarity and show how it can run directly counter to the predictions of our formal model of causal knowledge. Second, the presence of correlations among category features may also allow one to infer the presence of a feature given knowledge about the presence of one or more other features. We discuss the effects of observed inter-feature correlations in the third section, and compare them to the effects produced by direct knowledge of causal relations, the very relations that were responsible for generating the feature correlations in the first place.

Feature Inference via Causal Reasoning

It is clear that causal knowledge has predictive value. For example, given knowledge of the causes of fire, one can predict, with some certainty, that a flame will appear when a match is struck, oxygen is present, and so on. Likewise, given the causal relations that hold among features of an object, the presence of an unobserved feature can be inferred by reasoning about the causes of that feature and whether those causes are present in the object at hand.
In this article we provide direct evidence of causal reasoning in feature inference, and we test a well-specified theory about how this sort of reasoning might be done. This theory involves Bayesian networks: graphs in which variables are represented as nodes, and causal relations between the variables as directed links between the nodes. Figure 1 shows a simple Bayesian network in which three effect variables are dependent on a single cause variable. Rules by which inferences can be drawn from Bayesian networks have been well developed in artificial intelligence. One important rule is the causal Markov condition, which states that a variable X is independent of all variables that are not themselves descendants of X, given knowledge about the state of X's (immediate) parents (Pearl, 1988). In Figure 1, for example, the state of F2 is independent of F3 and F4 given knowledge about F1. It has been proposed that Bayesian networks are good psychological models of causal knowledge, and, in particular, of the causal knowledge associated with object concepts.

[Figure 1. A common-cause causal schema: a single cause F1 with three effects F2, F3, and F4.]
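The causal Markov condition described above can be checked directly by enumeration. The sketch below builds the common-cause network of Figure 1 (F1 causing F2, F3, and F4) with illustrative parameter values of our own choosing (not values from the experiments reported here), and verifies that once F1 is known, observing F3 and F4 does not change the predicted probability of F2:

```python
from itertools import product

# Common-cause network as in Figure 1: F1 is the root cause, and F2, F3, F4
# each depend only on F1. All probabilities are illustrative assumptions.
P_F1 = 0.5           # P(F1 = 1)
P_EFFECT = {1: 0.8,  # P(Fi = 1 | F1 = 1), for i in 2..4
            0: 0.2}  # P(Fi = 1 | F1 = 0)

def joint(f1, f2, f3, f4):
    """Joint probability, factorized along the causal graph."""
    p = P_F1 if f1 else 1 - P_F1
    for effect in (f2, f3, f4):
        q = P_EFFECT[f1]
        p *= q if effect else 1 - q
    return p

def prob(query, given):
    """P(query | given) by brute-force enumeration over all 16 states."""
    num = den = 0.0
    for f1, f2, f3, f4 in product((0, 1), repeat=4):
        state = {"F1": f1, "F2": f2, "F3": f3, "F4": f4}
        if all(state[k] == v for k, v in given.items()):
            p = joint(f1, f2, f3, f4)
            den += p
            if all(state[k] == v for k, v in query.items()):
                num += p
    return num / den

# Causal Markov condition: given the parent F1, the non-descendants
# F3 and F4 carry no further information about F2.
p_given_parent = prob({"F2": 1}, {"F1": 1})
p_given_all = prob({"F2": 1}, {"F1": 1, "F3": 1, "F4": 1})
assert abs(p_given_parent - p_given_all) < 1e-12
```

With these parameters both conditional probabilities come out to 0.8; the screening-off predicted by the Markov condition is exactly what the similarity-based responding reported later in the article violates.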
