Abstract

Algorithmic content moderation is becoming a common practice employed by many social media platforms to regulate ‘toxic’ language and to promote democratic public conversations. This paper provides a normative critique of the politically liberal assumption of civility embedded in algorithmic moderation, illustrated by Google’s Perspective API. From a radical democratic standpoint, the paper distinguishes, normatively and empirically, between incivility and intolerance, because the two have different implications for democratic discourse. It recognises the potential political, expressive, and symbolic values of incivility, especially for the socially marginalised, and therefore argues against regulating incivility with AI. There are, however, good reasons to regulate hate speech, but it is incumbent upon the users of AI moderation to show that this can be done reliably. The paper emphasises the importance of detecting diverse forms of hate speech that convey intolerant and exclusionary ideas without using explicitly hateful or extremely emotional wording. It then empirically evaluates the performance of current algorithmic moderation to see whether it can discern incivility from intolerance and whether it can detect diverse forms of intolerance. The findings reveal that current algorithmic moderation does not promote democratic discourse but rather deters it, both by silencing the uncivil yet pro-democratic voices of the marginalised and by failing to detect intolerant messages whose meanings are embedded in nuance and rhetoric. New algorithmic moderation should focus on the reliable and transparent identification of hate speech and should be in line with feminist, anti-racist, and critical theories of democratic discourse.
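For readers unfamiliar with the kind of scoring the paper evaluates, the sketch below illustrates how a comment can be scored with Google’s Perspective API, which returns a probability-style TOXICITY score for a piece of text. This is an illustrative sketch only, not the authors’ evaluation pipeline; the API key placeholder, the example comment, and the 0.7 flagging threshold are assumptions introduced here for demonstration.

```python
# Illustrative sketch: requesting a TOXICITY score from Google's Perspective API.
# Not the authors' evaluation pipeline; API_KEY, the example comment, and the
# 0.7 threshold are assumptions for demonstration purposes only.
from googleapiclient import discovery

API_KEY = "YOUR_API_KEY"  # assumption: a Perspective API key from Google Cloud

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

request = {
    "comment": {"text": "You people are ruining this country."},  # hypothetical example
    "requestedAttributes": {"TOXICITY": {}},
}

response = client.comments().analyze(body=request).execute()
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A platform acting on such scores would typically flag or remove comments
# above some threshold; 0.7 is an assumed value for illustration.
if score > 0.7:
    print(f"Flagged as toxic (score={score:.2f})")
else:
    print(f"Not flagged (score={score:.2f})")
```

The paper’s critique is directed at precisely this kind of single toxicity score: it conflates uncivil wording with intolerant content, which the authors argue have very different democratic consequences.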
