Abstract

Evaluating models of causal reasoning depends on how well subjects' causal beliefs are assessed, and eliciting those beliefs depends on the experimental questions put to subjects. We examined how question formats commonly used in causal reasoning research affect participants' responses. Our first experiment (Study 1) demonstrates that both the mean and the homogeneity of responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects' responses to a question requiring them to predict the effect of a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause given an effect. Study 2B suggests that diagnostic reasoning can benefit strongly from cues in the question about the temporal precedence of the cause. Finally, we evaluated 16 variations of recent computational models and found that model fit was substantially influenced by question type. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because doing so will enable comparisons between models of causal reasoning uncontaminated by method artifacts.

Highlights

  • Researchers in cognitive science have been interested in developing psychological models of human causal reasoning

  • Causal valence is coded as −1 = generative condition and 1 = preventive condition

  • The responses of subjects who were asked a question with a diagnostic reasoning direction were significantly higher and more precise than those of subjects asked a question with a predictive reasoning direction (b1 = 0.16, 95% credible interval (CI) = [0.02, 0.29], odds ratio = 1.17; d1 = 0.36, 95% CI = [0.09, 0.69])
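
To unpack the statistics in the last highlight: assuming b1 is a regression coefficient on the logit scale (the highlight does not state the link function, so this is an assumption), the reported odds ratio of 1.17 is what exponentiating b1 gives. A minimal sketch in Python:

    import math

    # Posterior summaries reported in the highlight above.
    b1 = 0.16                     # reasoning-direction coefficient, assumed logit scale
    ci_low, ci_high = 0.02, 0.29  # 95% credible interval for b1

    # On a logit link, exponentiating a coefficient yields an odds ratio:
    # exp(0.16) is approximately 1.17, matching the value quoted above.
    print(f"odds ratio: {math.exp(b1):.2f}")                                # 1.17
    print(f"OR 95% CI: [{math.exp(ci_low):.2f}, {math.exp(ci_high):.2f}]")  # [1.02, 1.34]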

Introduction

Researchers in cognitive science have been interested in developing psychological models of human causal reasoning. Several recent studies have compared the accuracy of different models in predicting human causal judgments (e.g., Cheng, 1997; Hattori and Oaksford, 2007; Perales and Shanks, 2007; Lu et al., 2008; Carroll et al., 2013). The results of these comparisons have varied across studies that used data from different experiments with different question formats. The experimental instructions may influence how subjects sample and evaluate the evidence. Matute (1996) found that subjects who were not provided
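
To make concrete the kind of models these comparisons involve, two classic strength measures from this literature can be written in a few lines: the contingency measure deltaP and Cheng's (1997) causal power for a generative cause. This is an illustrative sketch, not the implementation evaluated in any of the cited studies:

    def delta_p(p_e_given_c: float, p_e_given_not_c: float) -> float:
        """Contingency-based strength: deltaP = P(e|c) - P(e|not-c)."""
        return p_e_given_c - p_e_given_not_c

    def causal_power(p_e_given_c: float, p_e_given_not_c: float) -> float:
        """Cheng's (1997) causal power for a generative cause:
        w = deltaP / (1 - P(e|not-c))."""
        return delta_p(p_e_given_c, p_e_given_not_c) / (1.0 - p_e_given_not_c)

    # Example: the effect occurs 75% of the time with the cause, 25% without.
    print(delta_p(0.75, 0.25))       # 0.5
    print(causal_power(0.75, 0.25))  # ~0.667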
