Abstract

The internet provides easy access to a wealth of information, some of which is false and harmful. This is most apparent on social media platforms. To combat this, platforms have implemented various methods of content moderation to flag or block content that is inaccurate or violates community standards. This approach has limitations, ranging from the epistemic injustices that content moderation practices can produce to concerns about the legitimacy of these for-profit platforms' epistemic authority. In this paper, I highlight some of the epistemic challenges of online content moderation, focusing on how it harms internet users and moderators. If we are to moderate content effectively and ethically, we must attend to these challenges. I therefore map out an epistemic compass for online content moderation. Specifically, I argue for a pluralistic model that categorises online content and distributes the task of moderation among human moderators, automated moderators, and community moderators in a way that plays to the strengths of each. This compass is beneficial for two reasons: first, it allows room for the internet to realise its potential as a democratising force for knowledge, and second, it helps minimise the epistemic downsides of relying on profit-driven companies as epistemic authorities.
