The 'toxic turn' in social media platforms continues unabated. Hate speech, mis- and disinformation, and misogynistic and racist speech, images, memes, and videos are all far too common on social media platforms and on the internet more broadly. While the diminishing popularity of populist politicians raised hopes for less social toxicity, the Covid-19 pandemic introduced new and more complex dimensions. Tensions have emerged around what constitutes problematic content and who gets to define it. Co-regulation models, such as the EC Code of Conduct against Illegal Hate Speech, focus on the legality of certain types of content, while leaving other categories of problematic content to be defined by platforms. In parallel, the social media ecosystem has become more diverse, as new platforms with hands-off moderation policies attracted users who felt too constrained by the policies of mainstream platforms. The proposed panel examines this complex and dynamic landscape by problematizing what is understood as toxic, deplatformed, removable, and in general problematic content on platforms, with the aim of probing the boundaries of what constitutes acceptable discourse on platforms and mapping its implications. In particular, this panel discusses the broad definition of 'problematic content' employed by social media platforms: a catch-all term that cuts across hate speech and propaganda, including more politically topical content such as mal-, mis-, and disinformation and hyperpartisan and polarising content, but also abusive, misogynistic, racist, and homophobic discourse. The term is also employed to refer to spam and to content that infringes the Terms of Service or Community Standards of social media platforms. As such, it is a broad category that resists narrower classification, given the operational scope of its use.
Defining what constitutes problematic content is a key operation of platform content moderation policies, but it is also the subject of intense debate (de Gregorio, 2020; Gillespie, 2018; Gillespie et al., 2020; Gorwa et al., 2020). The panel interrogates the many definitions and applications of problematic content on social media platforms and applications through an empirically informed lens, focusing on deleted content, complex mixed narratives, and grey areas, including hidden misinformation on voice applications. The first contribution, 'Problematic Content according to Twitter Compliance API', presents ongoing work on the Twitter Compliance API and the Compliance Firehose, which allow researchers to identify content that has been deleted, deactivated, protected, or suspended from Twitter, a proxy for problematic content. In the second, 'Multi-Part Narratives on Telegram', Siapera presents ongoing research that probes the intersection between Covid-19 scepticism, far-right and other political narratives in vaccine-hesitant groups on Telegram. The third contribution, 'What if Bill Gates really is evil, people? Investigating the infodemic's grey areas', discusses the conceptual and methodological definitions of problematic content in relation to work on anti-vax and other conspiratorial narratives on Instagram and Twitter. The fourth contribution, 'Misinformation and other Harmful Content in Third-Party Voice Applications', focuses on problematic content that is yet to be identified on voice applications such as personal assistants; it addresses the methodological challenges of identifying and defining such content on applications that currently have no content moderation policies. All contributions foreground the difficulties and costs of identifying and dealing with problematic content on social media.
The panel fits with the theme of decolonization in two ways: firstly, because it is concerned with the tensions around how toxic/problematic content is defined and who gets to define it; and secondly, because of its focus on neo-colonial discourses or justifications for colonialism, both in narratives hosted by platforms and in platforms' attempts to regulate content. As some narratives are flagged for removal by social platforms, they also raise the question of who decides and what problematic content means, with far-right discourses exploiting this tension and ironically denouncing any attempt to regulate public discourse as ideological enforcement and as justification for (neo)colonial practices performed by social media platforms. From this perspective, platforms' own claims about what constitutes acceptable content are uncomfortably close to colonial narratives of civilised discourse and bring to the fore the potential for neo-colonial narratives and practices in digital spaces.