Abstract

Moderators are at the core of maintaining healthy online communities. For these moderators, who are often volunteers from the community, filtering through content and responding to misbehavior in a timely manner has become increasingly challenging as online communities continue to grow. To address such challenges of scale, recent research has explored designing better tools for moderators of various platforms (e.g., Reddit, Twitch, Facebook, and Twitter). In this paper, we focus on Discord, a platform where communities typically interact through large, synchronous group chats, creating a faster-paced and less structured environment than previously studied platforms. To tackle the unique challenges presented by Discord, we developed a new human-AI system called ConvEx for exploring online conversations. ConvEx is an AI-augmented version of the standard Discord interface designed to help moderators be proactive in identifying and preventing potential problems. It provides visual embeddings of conversational metrics, such as activity and toxicity levels, and can be extended to visualize other metrics. Through a user study with eight active moderators of Discord servers, we found that ConvEx supported several high-level strategies for monitoring a server and analyzing conversations. ConvEx gave moderators a holistic view of activity across multiple channels on a server while guiding their attention towards problematic conversations and messages within a channel. It also helped them identify the contextual information needed to judge the reliability of the AI's analysis, and to pick up on contextual nuances that the AI missed. We conclude with design considerations for integrating AI into future interfaces for moderating synchronous, unstructured online conversations.
