Abstract
Artificial intelligence can be presented as an ally in moderating violent content or purported news, but its use without human intervention to contextualize and adequately interpret expression leaves open the risk of prior censorship. This is currently under debate in the international arena: because artificial intelligence lacks the capacity to contextualize what it moderates, it is coming to serve as a tool for indiscriminate prior censorship rather than as a form of moderation aimed at protecting freedom of expression. Therefore, after analyzing international legislation, reports from international organizations, and the terms and conditions of Twitter and Facebook, we offer five proposals to improve algorithmic content moderation. First, we propose that States make their domestic legislation compatible with international standards of freedom of expression. We also urge them to develop public policies that implement legislation protecting the working conditions of the human reviewers who supervise automated content-removal decisions. For their part, social networks should present clear and consistent terms and conditions, adopt internal transparency and accountability policies regarding how AI operates in the dissemination and removal of online content and, finally, conduct prior human rights impact assessments of their AI.