Abstract

Practice- and Policy-oriented Abstract

Volunteer (human) moderators have been the essential workforce for content moderation, combating the growing volume of inappropriate online content. Because volunteer-based content moderation faces challenges in achieving scalable, desirable, and sustainable moderation, many online platforms have begun to adopt algorithmic content moderation tools (bots). However, it is unclear how volunteer moderators react to bot adoption in terms of their community-policing and community-nurturing efforts. We collected public moderation records produced by bots and volunteer moderators on Reddit. Our analysis suggests that bots can augment volunteer moderators: with bot assistance, volunteers shift their effort from simple policing work to a broader set of moderation activities, including policing subjective rule violations and meeting the increased need for community-nurturing activities that follow policing actions. This paper has implications for online platform managers looking to scale online activities, and it explains how volunteers can achieve more effective and sustainable content moderation with the assistance of bots.
