Abstract

Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision influence evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as the social influence of the moderated user affected outcomes differently depending on that user's gender. We discuss implications of these findings for content-moderation practices.
