Abstract
In today’s digital public sphere, individuals have little choice but to participate on online platforms, whose design choices shape what is possible, content policies influence what is permissible, and personalization algorithms determine what is visible. Ensuring that online content moderation is aligned with the public interest has emerged as one of the most pressing challenges for freedom of expression in the twenty-first century. Taking this challenge as its focus, this Article examines the promise and pitfalls of a human rights-based approach to content moderation—with a specific focus on the choices and challenges that online platforms are likely to confront in adhering to their corporate responsibility to respect human rights in this context. The Article examines three dimensions of a human rights-based approach to platform moderation in particular: a substantive dimension, encompassing the alignment of content moderation rules with international human rights law; a process dimension, encompassing the standards of transparency and oversight that platforms should implement as part of their human rights due diligence processes; and a procedural-remedial dimension, encompassing the procedural guarantees and remediation mechanisms that platforms should integrate within their systems of content moderation. The Article concludes by reflecting on some of the limits of the human rights-based approach and cautioning against viewing human rights as a panacea.