Abstract

We study how the structure of social media networks and the presence of fake news affect the degree of misinformation and polarization in a society. For that, we analyze a dynamic model of opinion exchange in which individuals have imperfect information about the true state of the world and exhibit bounded rationality. Key to the analysis is the presence of internet bots: agents in the network that spread fake news (e.g., a constant flow of biased information). We characterize how agents’ opinions evolve over time and evaluate the determinants of long-run misinformation and polarization in the network. To that end, we construct a synthetic network calibrated to Twitter and simulate the information exchange process over a long horizon to quantify the bots’ ability to spread fake news. A key insight is that significant misinformation and polarization arise in networks in which only 15% of agents believe fake news to be true, indicating that network externality effects are quantitatively important. Higher bot centrality typically increases polarization and lowers misinformation. When one bot is more influential than the other (asymmetric centrality), polarization is reduced but misinformation grows, as opinions move closer to the more influential bot’s preferred position. Finally, we show that threshold rules tend to reduce polarization and misinformation. This is because, as long as agents also have access to unbiased sources of information, threshold rules actually limit the influence of bots.
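The mechanics summarized above can be illustrated with a minimal simulation sketch. This is not the paper’s calibrated Twitter model; it is a hypothetical DeGroot-style averaging process in which a fraction of agents (the "believers," set to 15% as in the abstract) mix a bot’s constant biased signal into their updates, while the remaining agents mix in an unbiased signal centered on the truth. All function names, parameters, and the random network are illustrative assumptions.

```python
import random
import statistics

def simulate_opinions(n=200, steps=300, truth=0.0, bot_opinion=1.0,
                      believer_share=0.15, outside_weight=0.3, seed=42):
    """Sketch of DeGroot-style opinion averaging with one bot.

    Believers blend the bot's constant signal into their update;
    everyone else blends an unbiased signal at the true state.
    Returns (misinformation, polarization) as simple summary stats.
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    believers = set(rng.sample(range(n), int(believer_share * n)))
    # Random neighbor lists stand in for the Twitter-calibrated network.
    neighbors = [rng.sample([j for j in range(n) if j != i], 5)
                 for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            avg = sum(opinions[j] for j in neighbors[i]) / len(neighbors[i])
            outside = bot_opinion if i in believers else truth
            new.append((1 - outside_weight) * avg + outside_weight * outside)
        opinions = new
    # Misinformation: distance of the average opinion from the truth.
    misinformation = abs(statistics.mean(opinions) - truth)
    # Polarization: dispersion of opinions across agents.
    polarization = statistics.pstdev(opinions)
    return misinformation, polarization

mis, pol = simulate_opinions()
```

Even with only 15% believers, the persistent biased signal shifts the long-run average opinion away from the truth, which is the network-externality channel the abstract highlights.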
