Abstract

People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and the familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former uses a single source situation to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge for marshaling our understanding of analogical reasoning in the service of understanding explanation. We describe a model of explanation, derived from a model of analogy, that is adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.

Highlights

  • People constantly seek, generate, and evaluate explanations (Thagard, 1989, 2012; Sloman, 2005; Keil, 2006)

  • LISA’s knowledge representation scheme (LISAese) assumes that explicit propositions are represented in WM and consume finite WM capacity. We suggest that this approach is likely too demanding of WM capacity to serve as a general solution to the problem of representing causal relations for the purposes of explanation: P4 and P5 collectively introduce four additional role bindings into each schema, i.e., eight additional role bindings that would need to occupy slots in our intrinsically capacity-limited WM (see the sketch after this list)

  • We described our progress toward a process model of explanation
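The working-memory argument in the second highlight can be made concrete with a small sketch. Assuming a LISAese-style representation in which each proposition decomposes into role-filler bindings, counting the bindings that additional propositions such as P4 and P5 contribute shows how quickly a fixed WM budget is consumed. The Proposition class, the specific bindings, and the capacity limit below are illustrative assumptions, not the published LISA implementation.

```python
# Illustrative sketch (not the published LISA code): propositions as sets of
# role-filler bindings, checked against a hypothetical working-memory budget.

from dataclasses import dataclass


@dataclass
class Proposition:
    name: str
    bindings: list[tuple[str, str]]  # (role, filler) pairs


WM_CAPACITY = 6  # hypothetical limit on simultaneously active role bindings


def wm_load(props: list[Proposition]) -> int:
    """Total number of role bindings the propositions would place in WM."""
    return sum(len(p.bindings) for p in props)


# A minimal causal schema (P1-P3), plus two extra propositions (P4, P5) that
# together add four role bindings to the schema, as in the highlight above.
base_schema = [
    Proposition("P1", [("cause", "bacteria"), ("effect", "sour-milk")]),
    Proposition("P2", [("cause", "sour-milk"), ("effect", "bad-smell")]),
    Proposition("P3", [("cause", "bad-smell"), ("effect", "avoid-drinking")]),
]
extras = [
    Proposition("P4", [("antecedent", "P1"), ("consequent", "P2")]),
    Proposition("P5", [("antecedent", "P2"), ("consequent", "P3")]),
]

print("base schema:", wm_load(base_schema), "role bindings")
print("with P4 and P5:", wm_load(base_schema + extras),
      "role bindings (hypothetical capacity:", WM_CAPACITY, ")")
```

Under these assumptions, the base schema already fills the budget, and the explicit higher-order causal propositions push the load well past it, which is the point the highlight is making about representing causal relations explicitly in WM.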


Summary

INTRODUCTION

People constantly seek, generate, and evaluate explanations (Thagard, 1989, 2012; Sloman, 2005; Keil, 2006). The subject could use this prior example as a source analog (Holyoak and Thagard, 1989) with which to reason about the situation involving ministers and Coke, but only if their mental representations of the situations allowed them to tolerate the semantic differences between their friend, the cell phone company, and the cell phone service on the one hand, and the ministers, the Coca Cola Corporation, and Coke on the other (Hummel and Holyoak, 1997). These same kinds of flexibility characterize human reasoning with analogies, schemas, and rules (Holyoak and Thagard, 1989, 1995; Falkenhainer, 1990; Hummel and Holyoak, 1997, 2003). In contrast to the kind of serialization that goes on in schema-, rule-, or relation-induction from multiple examples, the serialization required for explanation must be performed in the service of making inferences about a single target (the explanandum) during a single reasoning episode.
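The contrast drawn here, a single source analog per target in classical analogy versus multiple sources integrated to explain one explanandum, can be illustrated by relaxing the one-to-one mapping constraint so that each target element may map to the best-matching element from any of several source analogs. The similarity heuristic and the example analogs below are toy assumptions for illustration, not the mapping mechanism described in the paper.

```python
# Toy sketch of relaxing the one-to-one constraint: rather than mapping one
# whole source analog onto the target, each target element is mapped to the
# best-matching element drawn from any of several source analogs.

def similarity(a: str, b: str) -> float:
    """Crude featural overlap between two labels (placeholder heuristic)."""
    return len(set(a) & set(b)) / len(set(a) | set(b))


def map_target(target_elements, source_analogs):
    """For each target element, pick the best match across all sources."""
    mapping = {}
    for t in target_elements:
        best = max(
            ((src_name, s, similarity(t, s))
             for src_name, elements in source_analogs.items()
             for s in elements),
            key=lambda triple: triple[2],
        )
        mapping[t] = best[:2]  # (source analog, mapped element)
    return mapping


# Hypothetical example: the explanandum draws on two different prior episodes,
# so its elements need not all map into the same source analog.
sources = {
    "phone-contract": ["friend", "cell-phone-company", "service"],
    "consumer-boycott": ["ministers", "corporation", "product"],
}
print(map_target(["ministers", "coca-cola", "coke"], sources))
```

The design choice being illustrated is only the relaxation itself: allowing a single target to recruit elements from more than one source is what distinguishes explanation-generation from one-to-one analogical mapping in the account sketched above.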

A PROCESS MODEL OF EXPLANATION
DISCUSSION
