Abstract
Given the enormous number of posts, major digital social networks, such as Facebook, must rely on artificial intelligence (AI) systems to regulate hate speech. This article explores the risks to free speech posed by the automated deletion of posts and discusses how AI systems can be subjected to human control. In a first step, the article examines the relevance of the individual right to freedom of expression for privately operated Internet platforms. It then highlights the specific risks that arise when AI systems are entrusted with identifying and removing hate speech. The recently passed EU AI Act represents the most ambitious attempt to date to regulate high-risk AI applications. The article examines whether, and if so to what extent, the various forms of human oversight mentioned in the EU AI Act are feasible in the area of hate speech regulation. Three core theses are put forward: First, the deletion of hate speech by AI systems constitutes a high-risk application that requires an extension of the regulatory scope of the EU AI Act. Second, ex-post monitoring is the only feasible form of human supervision, but it fails to guarantee full protection of the individual right to freedom of expression. Third, despite this shortcoming, implementing ex-post monitoring is necessary and legitimate to curb hate speech on digital social networks.