Abstract

Many online discussion providers are considering algorithm-based moderation software to support their employees in moderating toxic communication. Such technology is also attractive for public discussion providers, including public administration and public service media. For implementation to succeed, however, moderators must be able to understand the software correctly and use it in line with context-specific workplace requirements. This exploratory case study sheds light on the technology acceptance of algorithm-based moderation software among moderators in German public administration and public service media. Specifically, we focus on moderators’ user characteristics and their perceptions of workplace requirements as preconditions for technology acceptance. We conducted twelve structured qualitative interviews with moderators, combined with an enhanced cognitive walkthrough (ECW) of an algorithm-based moderation dashboard; the interviews additionally included stimuli presenting two different transparency mechanisms. Our findings suggest that transparency is one of the most frequently requested characteristics of algorithm-based moderation software and that, when provided, it benefits the acceptance of automated content classification in these systems. However, differences in moderators’ AI perceptions and technology commitment corresponded with different transparency motives regarding the moderation system. We assume that addressing these differing motives through different transparency mechanisms may positively affect technology acceptance.
