Purpose
This study provides a theoretically informed, data-driven assessment of the consequences of non-human bot participation in social accountability movements, specifically the anti-inequality/anti-corporate #OccupyWallStreet conversation stream on Twitter.

Design/methodology/approach
A latent Dirichlet allocation (LDA) topic modeling approach and XGBoost machine learning algorithms are applied to a dataset of 9.2 million #OccupyWallStreet tweets to analyze not only how the speech patterns of bots differ from those of other participants but also how bot participation affects the trajectory of the aggregate social accountability conversation stream. The authors consider two research questions: (1) Do bots speak differently than non-bots? (2) Does bot participation influence the conversation stream?

Findings
The results indicate that bots do speak differently than non-bots and that bots exert both weak-form and strong-form influence. Bots also steadily become more prevalent. At the same time, the results show that bots learn from and adapt their speaking patterns to emphasize the topics that are important to non-bots, while non-bots continue to speak about their initial topics.

Research limitations/implications
These findings improve understanding of the consequences of bot participation within social media-based democratic dialogic processes. The analyses also raise important questions about the increasing importance of apparently non-human actors within different spheres of social life.

Originality/value
To the authors' knowledge, this is the first study to use a theoretically informed Big Data approach to simultaneously consider the micro-level details and aggregate consequences of bot participation within social media-based dialogic social accountability processes.
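To illustrate the kind of analytical pipeline the abstract describes (LDA topic modeling combined with an XGBoost classifier), a minimal sketch follows. It assumes standard Python libraries (gensim and xgboost); the toy tweets, labels, preprocessing choices, and parameters are illustrative assumptions and do not reflect the authors' actual implementation or data.

```python
# Minimal sketch: LDA topic features feeding an XGBoost classifier.
# Data, parameters, and preprocessing are hypothetical placeholders.
from gensim import corpora
from gensim.models import LdaModel
import numpy as np
import xgboost as xgb

# Toy tokenized tweets and bot/non-bot labels (hypothetical).
tweets = [
    ["occupy", "wall", "street", "protest"],
    ["corporate", "greed", "inequality"],
    ["follow", "back", "free", "followers"],
    ["bank", "bailout", "protest", "march"],
]
labels = np.array([0, 0, 1, 0])  # 1 = bot, 0 = non-bot

# Fit an LDA topic model over the tokenized tweets.
dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               passes=10, random_state=0)

def topic_vector(bow, k=2):
    """Represent one tweet as its topic-probability vector."""
    dist = dict(lda.get_document_topics(bow, minimum_probability=0.0))
    return [dist.get(i, 0.0) for i in range(k)]

X = np.array([topic_vector(bow) for bow in corpus])

# Train an XGBoost classifier on the topic-distribution features.
clf = xgb.XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, labels)
print(clf.predict(X))
```

In this sketch the topic distributions serve as features for distinguishing bot from non-bot accounts; comparing topic emphasis across the two groups over time would be one way to examine whether bots speak differently and whether their participation shifts the aggregate conversation.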