Abstract

Online misinformation promotes distrust in science, undermines public health, and may drive civil unrest. During the coronavirus disease 2019 pandemic, Facebook, the world's largest social media company, began to remove vaccine misinformation as a matter of policy. We evaluated the efficacy of these policies using a comparative interrupted time-series design. We found that Facebook removed some antivaccine content, but we did not observe decreases in overall engagement with antivaccine content. Provaccine content was also removed, and antivaccine content became more misinformative, more politically polarized, and more likely to be seen in users' newsfeeds. We explain these findings as a consequence of Facebook's system architecture, which provides substantial flexibility to motivated users who wish to disseminate misinformation through multiple channels. Facebook's architecture may therefore afford antivaccine content producers several means to circumvent the intent of misinformation removal policies.
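For readers unfamiliar with the design, a comparative interrupted time-series analysis can be sketched as a segmented regression that compares level and slope changes in an outcome before and after an intervention, between an affected group and a comparison group. The sketch below is illustrative only, assuming weekly engagement data with hypothetical column names and a hypothetical policy date; it is not the paper's actual data or analysis code.

```python
# Minimal sketch of a comparative interrupted time-series (CITS) model.
# Assumed columns: week (integer index), group (1 = antivaccine pages,
# 0 = comparison pages), engagement (weekly engagement count).
import pandas as pd
import statsmodels.formula.api as smf

POLICY_WEEK = 60  # hypothetical week index when the removal policy began

df = pd.read_csv("weekly_engagement.csv")  # hypothetical input file
df["post"] = (df["week"] >= POLICY_WEEK).astype(int)          # post-policy indicator
df["time_since"] = (df["week"] - POLICY_WEEK).clip(lower=0)   # weeks since policy

# Segmented regression: level change (post) and slope change (time_since),
# with group interactions isolating the policy's differential effect on
# the affected group relative to the comparison group.
model = smf.ols(
    "engagement ~ week + post + time_since + group"
    " + group:week + group:post + group:time_since",
    data=df,
).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # robust to serial correlation
print(model.summary())
```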
