Abstract

Some fear that social bots, automated accounts on online social networks, propagate falsehoods that can harm public opinion formation and democratic decision-making. Empirical research, however, has produced puzzling findings. On the one hand, the content emitted by bots tends to spread very quickly in these networks. On the other hand, bots’ ability to contact human users has turned out to be very limited. Here we analyze an agent-based model of social influence in networks that explains this inconsistency. We show that bots may be successful in spreading falsehoods not despite their limited direct impact on human users, but because of this limitation. Our model suggests that bots with limited direct impact on humans may be more, not less, effective in spreading their views in the social network, because their direct contacts keep exerting influence on users that the bot does not reach directly. Highly active and well-connected bots, in contrast, may have a strong impact on their direct contacts, but these contacts grow too dissimilar from their network neighbors to further spread the bot’s content. To demonstrate this effect, we included bots in Axelrod’s seminal model of the dissemination of cultures and conducted simulation experiments demonstrating the strength of weak bots. A series of sensitivity analyses shows that the finding is robust, in particular when the model is tailored to the context of online social networks. We discuss implications for future empirical research and for developers of approaches to detect bots and misinformation.

Highlights

  • Since the 2016 US presidential election, there has been growing attention to an ancient political weapon: misinformation

  • We show that bots may be successful in spreading falsehoods not despite their limited direct impact on human users, but because of this limitation

  • Social bots have been identified as a potential threat to public opinion formation and democratic decision-making

Introduction

Since the 2016 US presidential election, there has been growing attention to an ancient political weapon: misinformation. Some users may even buy into an unbelievable story because it fits their partisan preoccupation [13], or because individuals communicate faster, more sloppily, and less considerately on online social networks than in other communication contexts [14]. While these individual-level explanations certainly contribute an important part to solving the puzzle of why bot-emitted fake news seems to have a significant impact on public discourse despite bots’ low network embeddedness, they neglect the complexity arising from the interaction of actors at the local level of social networks [15]. Even a bot with very limited reach directly influences at least a few users. These users will remain able to influence their friends, pulling them slowly but gradually toward the bot’s opinion. This process may take longer, but eventually the bot will have manipulated the beliefs of its direct network neighbors and, to a larger extent, those of its indirect contacts. The most ingeniously engineered bots are likely the ones that are harder to detect, and since those may have a powerful impact on the spread of falsehoods, attempts to detect these accounts or fact-check their content could be in vain.
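To make this mechanism concrete, the sketch below embeds a single bot in a minimal Python implementation of an Axelrod-style model of cultural dissemination. It is an illustrative sketch only: the grid topology, the feature and trait parameters, and the bot’s activity rate and single contact are assumptions chosen for readability, not the implementation used in the paper. The bot holds a fixed culture vector, occasionally influences one human agent, and that agent in turn keeps influencing its own neighbors.

```python
# Illustrative sketch: Axelrod-style cultural dissemination with one bot agent.
# All parameters (F, Q, grid size, BOT_ACTIVITY, the bot's single contact) are
# assumptions for this example, not the authors' actual simulation setup.

import random

F, Q = 5, 10           # number of cultural features and traits per feature (assumed)
N = 20                 # humans live on an N x N torus with von Neumann neighborhoods
BOT_ACTIVITY = 0.05    # probability that the bot, not a human, initiates a step (assumed)
STEPS = 200_000

def overlap(a, b):
    """Fraction of features on which two culture vectors agree."""
    return sum(x == y for x, y in zip(a, b)) / F

def neighbors(i, j):
    return [((i + di) % N, (j + dj) % N) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

# Random initial cultures for the human agents.
culture = {(i, j): [random.randrange(Q) for _ in range(F)]
           for i in range(N) for j in range(N)}

# The bot: a fixed culture vector it never updates, tied to a single contact ("weak bot").
bot_culture = [0] * F
bot_target = (0, 0)

for _ in range(STEPS):
    if random.random() < BOT_ACTIVITY:
        # Bot step: the bot tries to influence its one human contact.
        src_culture, (ti, tj) = bot_culture, bot_target
    else:
        # Human step: a random agent is influenced by a random grid neighbor.
        ti, tj = random.randrange(N), random.randrange(N)
        src_culture = culture[random.choice(neighbors(ti, tj))]

    target = culture[(ti, tj)]
    o = overlap(src_culture, target)
    # Axelrod rule: interaction occurs with probability equal to cultural overlap,
    # and only if the two vectors still differ on at least one feature.
    if 0 < o < 1 and random.random() < o:
        f = random.choice([k for k in range(F) if src_culture[k] != target[k]])
        target[f] = src_culture[f]

# How far did the bot's fixed culture spread through the human population?
adopters = sum(overlap(c, bot_culture) == 1.0 for c in culture.values())
print(f"Agents fully sharing the bot's culture: {adopters} / {N * N}")
```

Raising BOT_ACTIVITY or giving the bot many contacts in such a sketch pushes its targets away from their human neighbors more quickly; that growing dissimilarity is the effect the abstract describes as limiting further spread of the bot’s content.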

Background
Equilibrium analysis
Analysis of model dynamics
Statistical analysis of relationships
Sensitivity analyses
Unbalanced degree distributions
Summary and discussion