Abstract

This article explores resistance to coloniality embedded in Instagram's automated content moderation. With the growing use of artificial intelligence in Instagram's content moderation, the company is increasingly criticized for the discriminatory effects of its algorithmic systems, which disproportionately punish content produced by marginalized groups—such as racial and gender minorities—while leaving untouched similar content produced by more privileged groups. Additionally, when confronted with criticism, the company tends to respond with arguments that diminish users' experiences of discrimination while reclaiming a monopoly over any statement about its algorithms on the grounds that it alone holds the technical information about them. This article articulates the concepts of "automated coloniality," "datacentric epistemology," and "parrhesia" to discuss the colonial features of Instagram's automated content moderation. Building on this framework, a portfolio of Brazilian resistance practices is used to illustrate instances of coloniality within the social network's content moderation and how Brazilian users respond to them, affirming their criticism as legitimate.
