Abstract

Artificial intelligence (AI)-based moderation systems are increasingly used by social media companies to identify and remove inappropriate user-generated content (e.g., misinformation) on their platforms. Previous research on AI moderation has primarily focused on situational and technological factors in predicting users’ perceptions of it, while little is known about the role of individual characteristics. To bridge this gap, this study examined whether and how familiarity, political ideology, and algorithm acceptance relate to perceptions of AI moderation. Analyzing survey data from a nationally representative panel in the United States (N = 4,562), we found that individuals who were more familiar with AI moderation expressed less favorable perceptions of it, and that self-identified liberals were more likely than self-identified conservatives to view AI moderation positively. Higher algorithm acceptance was also associated with more favorable perceptions. Moreover, trust in AI moderation significantly mediated the relationships between these three individual characteristics (familiarity, political ideology, and algorithm acceptance) and perceptions. The findings enrich the current understanding of user responses to AI moderation and offer practical implications for policymakers and designers.
