Abstract

Explanations are central to understanding the causal relationships between entities in the environment. Rather than examining the basic heuristics and schemata that inform the acceptance or rejection of scientific explanations, recent studies have predominantly examined complex explanatory models. In the present study, we examined which essential features of explanatory schemata can account for phenomena that are attributed to domain-specific knowledge. In two experiments, participants judged the validity of logical syllogisms and reported their confidence in each response. In addition to the validity of the explanations, we manipulated whether scientists or people explained an animate or inanimate phenomenon using mechanistic (e.g., force, cause) or intentional (e.g., believes, wants) explanatory terms. Results indicate that intentional explanations were generally judged less valid than mechanistic explanations, and that ‘scientists’ were considered relatively more reliable sources of information about inanimate phenomena, whereas ‘people’ were considered relatively more reliable sources of information about animate phenomena. Moreover, after controlling for participants’ performance, we found greater overconfidence for valid intentional and invalid mechanistic explanations, suggesting that belief bias is stronger in these conditions.

Highlights

  • Our ability to comprehend the quality of scientific evidence is critical to navigating the modern world: whether in terms of assessing clinicians’ prescriptions, determining the likelihood and extent of global warming, evaluating the function and output of algorithms and artificial intelligence, or understanding the culpability of an accused criminal

  • By manipulating the animacy of the explanandum, the explanans, and the source of the information (e.g., ‘people’ or ‘scientists’), our results revealed that participants maintained heuristics based on their prior beliefs, which biased their responses

  • Participants appear to hold the strongest beliefs for mechanistic explanations of inanimate phenomena


Introduction

Our ability to comprehend the quality of scientific evidence is critical to navigating the modern world: whether in terms of assessing clinicians’ prescriptions, determining the likelihood and extent of global warming, evaluating the function and output of algorithms and artificial intelligence, or understanding the culpability of an accused criminal. During the COVID-19 pandemic, disinformation and misinformation led to further confusion over the nature of the virus and influenced national responses (Emmott, 2020; Robins-Early, 2020; Schulte, 2020). By manipulating the animacy of the explanandum (animate or inanimate natural phenomena), the explanans (intentional or mechanistic explanations), and the source of the information (e.g., ‘people’ or ‘scientists’), our results revealed that participants maintained heuristics based on their prior beliefs, which biased their responses. By statistically controlling for participants’ performance when assessing response confidence (e.g., Ziori and Dienes, 2008; Schoenherr and Lacroix, 2020), we examined the subjective perception of certainty for intentional and mechanistic explanations, with evidence suggesting that participants were more overconfident for valid intentional and invalid mechanistic explanations.


