Abstract

What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in the perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it. This paper was accepted by Elke Weber, judgment and decision making.
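The abstract's formal-model claim can be illustrated with a minimal Bayesian sketch (a toy construction of our own, not the authors' actual model): if fact-checkers only ever tag false headlines, and tag only a fraction of them, then the absence of a tag is evidence that a headline is true, so a rational updater should rate untagged false headlines as more accurate than under no-tagging at all.

```python
def p_false_given_untagged(p_false, tag_rate):
    """Posterior probability that a headline is false given it carries no
    warning tag, assuming (toy model) that only false headlines are ever
    tagged and that fact-checkers catch a fraction `tag_rate` of them."""
    untagged_false = p_false * (1 - tag_rate)  # false headlines that slip through
    untagged_true = 1 - p_false                # true headlines are never tagged
    return untagged_false / (untagged_false + untagged_true)

# With a 50% prior that a headline is false and 50% fact-checker coverage,
# seeing no tag lowers the rational estimate that the headline is false
# below the prior -- the implied truth effect.
prior = 0.5
posterior = p_false_given_untagged(prior, 0.5)  # 0.25 / 0.75 = 1/3 < 0.5
```

With `tag_rate = 0`, the posterior equals the prior, matching the abstract's point that the effect arises from partial tagging coverage, and disappears when untagged headlines carry no information (e.g., when verifications on true headlines remove the ambiguity).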

Highlights

  • The spread of misinformation on social media poses an important challenge

  • This is inconsistent with popular motivated reasoning accounts of fake news, under which people are predicted to discount information that contradicts their political ideology (Kahan 2017)

  • Participants were substantially less likely to consider sharing false headlines tagged with a warning (16.1%) compared with false headlines in the control (29.8%; p < 0.001), and this main effect of the warning was qualified by an interaction with political concordance (p = 0.005): the warning effect was significantly larger for concordant false headlines than for discordant false headlines


Introduction

The spread of misinformation on social media poses an important challenge. Nowhere are concerns about misinformation currently more prevalent than in the context of politics, where so-called partisan fake news stories—that is, fabricated stories presented as if from legitimate sources—emerged as a major issue during the 2016 U.S. presidential election (Lazer et al. 2018). These stories largely spread online, and social media sites are under increasing pressure to intervene and curb the problem of fake news. We consider one intuitively compelling and widely implemented approach to fighting fake news: providing information about the veracity of news stories by tagging demonstrably false headlines with warnings. We aim to advance theory regarding perceptions of misinformation broadly while helping to inform policy decisions of social media platforms and their regulators.

