Abstract

What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an “implied truth” effect whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we show that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (N = 5,271 MTurkers), we find that while warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (N = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines – which removes the ambiguity about whether untagged headlines have not been checked or have been verified – eliminates, and in fact slightly reverses, the implied truth effect. Together, these results undermine theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation – a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it.
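
The core intuition behind the formal model can be sketched with a few lines of Bayes' rule. The snippet below is illustrative only, not the paper's actual model: it assumes fact-checkers tag some fraction of false headlines (`coverage`) and never tag true ones, and the numeric values for `prior` and `coverage` are invented for the example.

```python
# Minimal Bayesian sketch of the implied truth effect.
# Assumptions (not from the paper): fact-checkers tag a fraction
# `coverage` of false headlines and never tag true ones; `prior` is
# the reader's prior belief that a given headline is true.

def posterior_true_given_untagged(prior: float, coverage: float) -> float:
    """P(headline is true | no warning tag), via Bayes' rule."""
    p_untagged_if_true = 1.0               # true headlines never get tagged
    p_untagged_if_false = 1.0 - coverage   # some false headlines slip through
    p_untagged = prior * p_untagged_if_true + (1 - prior) * p_untagged_if_false
    return prior * p_untagged_if_true / p_untagged

prior, coverage = 0.5, 0.3  # illustrative values only
print(round(posterior_true_given_untagged(prior, coverage), 3))  # 0.588 > prior
```

For any nonzero coverage the posterior exceeds the prior, so a rational reader treats the absence of a tag as weak evidence of accuracy: this is the implied truth effect the studies then measure empirically.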
