Abstract

When platforms use algorithms to moderate content, how should researchers understand the impact on moderators and users? Much of the existing literature on this question views moderation as a series of decision-making tasks and evaluates moderation algorithms based on their accuracy. Drawing on literature from the field of platform governance, I argue that content moderation is not merely a series of discrete decisions but rather a complex system of rules, mechanisms, and procedures. Research must therefore articulate how automated moderation alters the broader regime of governance on a platform. To demonstrate this, I report the findings of a qualitative study of the Reddit bot AutoModerator, using interviews and trace ethnography. I find that the scale of the bot allows moderators to carefully manage the visibility of content and of content moderation on Reddit, fundamentally transforming the basic rules of governance on the platform.
