Abstract

At the heart of this paper is an examination of the colloquial concept of a ‘shadow ban’. It reveals ways in which algorithms on the Facebook platform suppress content distribution without specifically targeting it for removal, and examines the consequent stifling of users’ speech. It shows how the Facebook shadow ban is implemented by blocking the dissemination of content in News Feed. The decision-making criteria are based on ‘behaviour’, a term referring to Page activity that is identifiable through patterns in the data. This technique is rooted in computer security, and it raises questions about the balance between security and freedom of expression. The paper is situated within the field of online platforms’ responsibility for content moderation. It studies the experience of the shadow ban on 20 UK-based Facebook Pages between November 2019 and January 2021. The potential harm was evaluated using human rights standards and a comparative metric derived from Facebook Insights data. The empirical research is connected to recent legislative developments: the EU’s Digital Services Act and the UK’s Online Safety Bill. Its most salient contribution may be its analysis of ‘behaviour’ monitoring and how legislators interpret it.
