Abstract

Recent research comparing mental models theory and causal Bayes nets on their ability to account for discounting and augmentation inferences in causal conditional reasoning had some limitations. One earlier experiment used an ordinal scale with multiple items but analysed the data separately by subjects and by items, a procedure that creates a variety of problems which can be resolved by an appropriate cumulative link mixed models approach in which items are treated as random effects. Experiment 1 replicated this earlier experiment and analysed the results using these more appropriate techniques. Although it successfully replicated the earlier findings, the pattern of results could also be explained by a much simpler “shallow encoding” hypothesis. Experiment 2 therefore introduced a manipulation to test this hypothesis critically. The results favoured the causal Bayes nets predictions over shallow encoding and were not consistent with mental models theory. Experiment 1 provided only qualified support for the causal Bayes net approach because it also replicated the failure to observe one of the predicted main effects; Experiment 2 ruled out one plausible explanation for this failure. While these experiments succeeded within the limited goals set for them, more research is required to account for the full pattern of findings obtained with this paradigm.
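The cumulative link models referred to above treat the ordinal rating as arising from thresholds on a latent continuum. A minimal sketch of how a cumulative (proportional-odds) logit model turns a linear predictor and a set of thresholds into category probabilities; the threshold and predictor values are illustrative placeholders, not estimates from the paper:

```python
import math

def logistic(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def cumulative_logit_probs(thresholds, eta):
    """Category probabilities for an ordinal response under a
    cumulative (proportional-odds) logit model:
    P(Y <= k) = logistic(theta_k - eta), for thresholds theta_1 < ... < theta_{K-1}."""
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Illustrative 4-point ordinal scale: three thresholds, one linear predictor value.
probs = cumulative_logit_probs(thresholds=[-1.0, 0.0, 1.0], eta=0.5)
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```

In a full mixed-model analysis the linear predictor `eta` would also contain by-subject and by-item random effects, which is what allows subjects and items to be modelled jointly rather than in separate analyses.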

Highlights

  • Conditionals, which are typically rendered in English as if p q (where p is called the antecedent and q is called the consequent), are essential to human inference

  • Using this approach allowed us to quantify, using the Bayes Factor, how much more likely CM, which uniquely predicts a main effect of causal direction, was to have generated the data than mental models theory
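The Bayes Factor mentioned in the highlight above quantifies how much more likely one model is than another to have generated the data. A minimal sketch of one common way to approximate it from model fit, via the BIC approximation BF01 ≈ exp((BIC1 − BIC0)/2); the BIC values used here are illustrative placeholders, not results from the paper:

```python
import math

def approx_bayes_factor(bic_model0: float, bic_model1: float) -> float:
    """Approximate Bayes Factor favouring model 0 over model 1,
    computed from the two models' BIC values."""
    return math.exp((bic_model1 - bic_model0) / 2.0)

# Illustrative values: model 0 fits better (lower BIC) by 10 points.
bf = approx_bayes_factor(bic_model0=1500.0, bic_model1=1510.0)
print(round(bf, 2))  # exp(5) ≈ 148.41, strong evidence for model 0
```

A Bayes Factor of 1 means the data are equally likely under both models; values much greater than 1 favour model 0, values much less than 1 favour model 1.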



Introduction

Conditionals, which are typically rendered in English as if p q (where p is called the antecedent and q is called the consequent), are essential to human inference. Conditional sentences are used to express a variety of relations, such as causation (if you turn the key, the car starts), deontic regulations (if you are drinking beer, you must be over 18), and property attribution (if it’s a raven, it’s black). Conditionals allow us to think hypothetically about what would (or should) happen in the world should certain conditions expressed in the antecedent, p, obtain. Much of what we know about reasoning with conditionals has come from the investigation of causal conditionals.

