Violence and Social Media: The Amplification of Antisemitism on X (formerly Twitter) Post Elon Musk’s Acquisition

Abstract

The study focuses on the transformation of the Twitter platform following its acquisition by Elon Musk. Through desk research, we examine how cost reductions and the declared support for freedom of speech influenced the spread of antisemitic posts. We find that content moderation on Platform X remains predominantly reactive. Through a case study, we illustrate a real-life incident associated with the spread of extremist speech. We also identify key factors contributing to the escalation of antisemitic narratives on Platform X.

Similar Papers
  • Research Article
  • Citations: 1
  • 10.47611/jsrhs.v12i3.4637
Analyzing Twitter Data to Understand Stigmatization of Schizophrenia Before and After Elon Musk
  • Aug 31, 2023
  • Journal of Student Research
  • Melinda Mo + 1 more

Stigmatization of mental health has become an increasingly prevalent issue in recent years, particularly on social media. The perpetuation of online stigma has a significant negative impact on those with schizophrenia, affecting their social lives, self-esteem, ability to succeed in treatment, and more. One major factor that may affect stigmatization on social media is how content moderation is perceived by the users of the platform, as well as the social norms surrounding acceptable discussions on said platform. This relationship has not yet been examined in the context of schizophrenia stigma on Twitter. Elon Musk’s recent acquisition of Twitter has provided an opportunity to do just that, as his public statements and goals for the platform have suggested increased “freedom of speech” and decreased restrictions on posted content, changing how Twitter users perceive allowed conversations. The current study analyzed discussions of schizophrenia on Twitter before and after Elon Musk’s acquisition, coding individual Tweets based on the extent to which they indicated a stigmatizing attitude toward schizophrenia. Main findings include a marginally significant positive association between schizophrenia stigmatization and Musk’s acquisition of Twitter, with an increase in stigmatizing attitudes. Further, in agreement with previous literature on this topic, this study reveals that the stigma of schizophrenia was widespread on Twitter both prior to and following Musk’s acquisition. The results of the study may be useful in guiding social networking companies and advocacy efforts to create programs or restrictions that counter stigmatization and further protect those with schizophrenia.
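
As a rough illustration of the before/after comparison reported above, the direction of the shift could be checked with a two-proportion test. This sketch is ours, not the authors' analysis; the counts, sample sizes, and the choice of test are illustrative assumptions.

    # Illustrative two-proportion z-test: did the share of stigmatizing tweets
    # rise after the acquisition? All numbers are hypothetical placeholders.
    from statsmodels.stats.proportion import proportions_ztest

    stigmatizing = [120, 155]  # hypothetical stigmatizing-tweet counts (before, after)
    coded = [500, 500]         # hypothetical number of tweets coded in each period

    # alternative="smaller" tests H1: proportion(before) < proportion(after)
    z_stat, p_value = proportions_ztest(stigmatizing, coded, alternative="smaller")
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")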

  • Research Article
  • Citations: 84
  • 10.1002/poi3.198
Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision‐Making Systems
  • Jan 24, 2019
  • Policy & Internet
  • Ben Wagner

Automated decision making is becoming the norm across large parts of society, which raises interesting liability challenges when human control over technical systems becomes increasingly limited. This article defines “quasi-automation” as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation are examined, where human agency in decision making is currently debatable: self-driving cars, border searches based on passenger name records, and content moderation on social media. While there are specific regulatory mechanisms for purely automated decision making, these regulatory mechanisms do not apply if human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability in attempting to regulate human or machine agency, rather than looking to regulate both. This results in regulatory gray areas where the regulatory mechanisms do not apply, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.

  • Conference Article
  • Citations: 3
  • 10.1109/assp57481.2022.00022
Content Moderation in Social Media: The Characteristics, Degree, and Efficiency of User Engagement
  • Dec 1, 2022
  • Kanlun Wang + 3 more

Social media have emerged as common platforms for knowledge sharing and exchange in online communities. Meanwhile, they have also become a hotbed for the diffusion of misinformation. Content moderation is one of the measures for preventing the distribution of misinformation. Despite the increasing research attention to content moderation, the role of user engagement in content moderation remains significantly understudied. It is unclear how different characteristics and degrees of user engagement in social media might impact the performance of content moderation. In addition, the efficiency of content moderation has not been addressed by prior studies. This study aims to fill these research gaps by investigating the characteristics of user engagement behavior in social media and by developing automated models that support content moderation by leveraging a state-of-the-art pre-trained model for text analysis. The evaluation results with Reddit data suggest that the directivity and temporal characteristics of user engagement have significant effects on the effectiveness of content moderation. Additionally, leveraging the entire history of user engagement tends to be inefficient or even impractical, yet our findings provide evidence and a guide for improving the efficiency of content moderation using user engagement data without compromising model effectiveness. Our findings have research and practical implications for the moderation and deterrence of misinformation in social media.
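
The "pre-trained model for text analysis" mentioned above can be pictured with a minimal classification sketch. The model choice and threshold below are our assumptions for illustration, not the pipeline the authors built.

    # Minimal sketch: score comments with an off-the-shelf pre-trained toxicity
    # classifier and flag high-scoring ones for moderator review.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="unitary/toxic-bert")  # model choice is ours

    comments = ["totally reasonable take", "you are an idiot"]
    for text in comments:
        result = classifier(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        flagged = result["label"] == "toxic" and result["score"] > 0.9  # threshold is arbitrary
        print(text, "->", "flag for review" if flagged else "keep")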

  • Research Article
  • 10.6001/fil-soc.2024.35.1.9
New (Digital) Media in Creative Society: Ethical Issues of Content Moderation
  • Feb 23, 2024
  • Filosofija. Sociologija
  • Salvatore Schinello

Digitalisation and platformisation are continuously impacting and reshaping the societies we live in. In this context, we are witnessing the rise of phenomena such as fake news, hate speech, and the sharing of other illegal content through social media. In this paper, I propose some ethical reflections on content moderation in the context of digital (social) media, as this topic seems, to me, to already incorporate other relevant digital issues, such as algorithmic bias, the spread of fake news, and the potential misuses of artificial intelligence. In the first section, I will provide a few hermeneutic reflections on a speech given by the Italian scholar Umberto Eco, which appears to underline the necessity of content moderation in an era of digital (social) media. In the second section, I will analyse, through a consequentialist perspective, critical and ethical issues posed by content moderation. In particular, I suggest the idea of a ‘moderate’ (reasonable and limited) content moderation that can only be assured by humans, as they are able to contextualise content, take emotions and subjective elements into account, and apply critical thinking and adaptability in complex circumstances.

  • Research Article
  • 10.1080/19331681.2025.2607035
Pakistan’s content moderation paradox: combating violent radicalism in a competitive authoritarian regime
  • Dec 22, 2025
  • Journal of Information Technology & Politics
  • Muhammad Akram + 1 more

In Pakistan, social media has been (ab)used by the competitive authoritarian regime to advance religious or political propaganda and suppress dissent, which has complicated the dynamics of radicalism leading to violent extremism in the country. Overtly, the state has been tempted to regulate content on social media, but its effectiveness remains in question, given its politically biased application. Amidst political fragility, this study aimed to understand the applicability of content moderation to prevent or counter radical and extremist narratives in Pakistan’s social media space. In-depth interviews were conducted with social media activists focused on the country’s societal issues. This is the first study in Pakistan to engage social media activists with a specific focus on content moderation concerning violent extremism. Findings revealed that, amid limited awareness of content moderation, the current moderation laws are not comprehensive enough to prevent the spread of radical narratives online and are instead misused to serve the country’s competitive authoritarian regime. Social media activists are concerned that the existing content moderation laws are tools to suppress political dissent at the local and national levels. This study not only recommends extremism audits of existing content moderation policies in Pakistan but also calls for their independent and unbiased application.

  • Research Article
  • Citations: 10
  • 10.1108/aaaj-11-2022-6119
Content moderation on social media: constructing accountability in the digital space
  • May 15, 2023
  • Accounting, Auditing & Accountability Journal
  • Conor Clune + 1 more

Purpose: The paper examines the content moderation practices and related public disclosures of the world’s most popular social media organizations (SMOs). It seeks to understand how content moderation operates as a process of accountability to shape and inform how users (inter)act on social media and how SMOs account for these practices.

Design/methodology/approach: Content analysis of the content moderation practices of selected SMOs was conducted using a range of publicly available data. Drawing on seminal accountability studies and the concepts of hierarchical and holistic accountability, the authors investigate the design and appearance of the systems of accountability that seek to guide how users create and share content on social media.

Findings: The paper unpacks the four-stage process of content moderation enacted by the world’s largest SMOs. The findings suggest that while social media accountability may allow SMOs to control the content shared on their platforms, it may struggle to condition user behavior. This argument is built around the limitations the authors found in the way performance expectations are communicated to users, the nature of the dialogue that manifests between SMOs and users who are “held to account”, and the metrics drawn upon to determine the effectiveness of SMOs’ content moderation activities.

Originality/value: This is the first paper to examine the content moderation practices of the world’s largest SMOs. Doing so extends understanding of the forms of accountability that function in the digital space. Crucial future research opportunities are highlighted to provoke and guide debate in this research area of escalating importance.

  • Research Article
  • 10.14267/cjssp.2025.1.1
Social Media, Market Regulation and CEO Influence: Lessons for Market Efficiency
  • Oct 8, 2025
  • Corvinus Journal of Sociology and Social Policy
  • Cunha Antonio M + 2 more

Social media communication has become increasingly influential in the stock market. Platforms such as X (formerly known as Twitter), Facebook, and Reddit serve as channels for corporate CEOs to share information, analysis, and opinions that can influence the stock prices of the companies they manage. The research underlying this paper tested whether it is possible to obtain abnormal stock trading returns by following Elon Musk’s tweets about Tesla. We studied ten years of Elon Musk’s tweets about Tesla, collecting data on 3,158 tweets and 2,420 stock trading days and identifying 33 events. We employed an event study methodology, utilizing the Five-Factor Model and the Capital Asset Pricing Model to estimate Tesla’s daily expected returns and assess the statistical significance of Tesla’s abnormal stock returns following Elon Musk’s tweets. We estimated abnormal returns over the event window and on the event day. We also estimated a logit regression on the ten-year sample period to assess whether the tweets caused aggregate abnormal returns. We conclude that Elon Musk’s tweets did not significantly impact Tesla’s stock price, suggesting that the market is informationally efficient and that, in recent years, it has not been possible to obtain abnormal returns by trading based on these tweets. Our methodological contribution is isolating tweet-related price reactions by excluding pre-event days, which are usually contaminated by fundamental information, and focusing exclusively on the effects of social media on stock price returns. We contribute to the literature on the relationship between social media and market efficiency, offering valuable insights for investors, regulators, and policymakers.
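
For orientation, the event-study quantities this abstract refers to can be written out in standard textbook notation; this is a generic restatement, not the paper's exact specification. The abnormal return of stock i on day t is the realized return minus the benchmark-implied expected return, and inference is run on abnormal returns cumulated over the event window:

    \[
    AR_{i,t} = R_{i,t} - \widehat{E}[R_{i,t}], \qquad
    \widehat{E}[R_{i,t}] = r_f + \hat{\beta}_i \,(R_{m,t} - r_f) \quad \text{(CAPM benchmark)}
    \]
    \[
    CAR_i(t_1, t_2) = \sum_{t=t_1}^{t_2} AR_{i,t}
    \]

The Five-Factor Model swaps the CAPM benchmark for a regression that adds size, value, profitability, and investment factors; in either case, the market-efficiency null is that the cumulative abnormal return around each tweet event is indistinguishable from zero.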

  • Research Article
  • Citations: 1
  • 10.5204/mcj.2759
A Study in Anxiety of the Dark
  • Apr 27, 2021
  • M/C Journal
  • Toija Cinque

  • Research Article
  • Citations: 50
  • 10.1177/14614448221109804
(In)visible moderation: A digital ethnography of marginalized users and content moderation on Twitch and Reddit
  • Jul 18, 2022
  • New Media & Society
  • Hibby Thach + 3 more

Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts are suspended, the processes governing content moderation are largely invisible, making it difficult to assess content moderation bias. To study this bias, we conducted a digital ethnography of marginalized users on Reddit’s /r/FTM subreddit and Twitch’s “Just Chatting” and “Pools, Hot Tubs, and Beaches” categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, “unban appeal” streams, and “back from my ban” streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, at times harming those who must see offensive content and at other times allowing for increased platform accountability.

  • Research Article
  • 10.31269/3s4fqf49
Fediverse Blocklists: Moderation in Noncapitalist Social Media
  • Sep 20, 2025
  • tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society
  • Robert W Gehl

Content moderation is a key form of labour on social media. While much of the scholarly attention has been given to paid or voluntary content moderation on corporate social media, this paper draws attention to content moderation on noncapitalist, alternative social media. Specifically, it focuses on the use of shared instance blocklists on the fediverse, a noncentralised network of community-run social media sites. The paper draws on critical analysis of the act of listing, which finds that listing is an administrative and moral act that introduces three problems: lists don’t carry their own selection criteria, they are binary, and they can grow. However, listing also produces knowledge. Drawing on this literature as well as participant observation and interviews, the paper explores how fediverse blocklist developers attempt to mitigate the problems of lists while also generating knowledge about content moderation in noncapitalist social media.

  • Research Article
  • Citations: 9
  • 10.1177/14614448241228850
An attack on free speech? Examining content moderation, (de-), and (re-) platforming on American right-wing alternative social media
  • Feb 5, 2024
  • New Media & Society
  • Brittany Shaughnessy + 3 more

Contemporary research on social media looks different than it did in the late 2010s, with users facing a high-choice social media environment as new platforms emerge. Subsequently, alt-right sites have experienced a rise in users, sometimes those who have faced content moderation by traditional social media sites. As such, scholars have investigated the impact of this content moderation (e.g. de-platforming) on users and on the content posted on new alt-right platforms. This work seeks to expand extant research by analyzing a survey of Gab, Parler (now defunct), Truth Social, and Rumble users (N = 427) who have experienced content moderation on other social media sites. While we find that those temporarily or permanently banned from traditional sites are unlikely to leave those platforms altogether for a right-wing alternative social media (RWASM) site, there are active users on these sites worth studying.

  • Research Article
  • 10.34190/ecsm.12.1.3379
Evaluating the Different Approaches to Social Media Regulation and Liability
  • May 20, 2025
  • European Conference on Social Media
  • Murdoch Watney

With more than 5.17 billion users, social media is one of the most powerful forces in the world today. Consumers and businesses rely on it for connecting, researching, and communicating. Over the years, social media platforms have evolved into complex landscapes plagued by data privacy breaches, content moderation controversies, and mounting concerns about mental health outcomes. But how do governments and social media companies protect public safety against the risks and threats that social media presents, such as misinformation, deep fakes, hate speech, and extremist communication? The discussion explores the different approaches to regulation and liability of social media platforms. Some governments have shifted away from platform self-regulation of content moderation to legal regulation. For example, the European Union Digital Services Act and the United Kingdom Online Safety Act both provide for the accountability of a social media company for illegal and harmful content on its platform, but their approaches differ. Government control treads a fine line between free speech and censorship, weighing over-regulation that may stifle innovation against the responsibilities that come with running a platform, public safety, and the future of the internet. In the United States (US), free speech is protected under the First Amendment of the Constitution, which allows citizens to express themselves without government interference. Since social media companies are private companies, they can decide which speech they wish to host and amplify. Section 230 of the Communications Decency Act provides immunity against liability for user-generated content. In recent years there have been legal disputes regarding this immunity protection and content moderation decisions. Allowing a social media platform to self-regulate may be good for innovation, but social media is now a powerful communication space with billions of voices, and some of these voices are illegal or harmful. It may be that some form of government oversight should be in place to protect public safety. The discussion highlights that governments around the world are increasingly alarmed by the potential for social media platforms to be exploited, and this has resulted in an ongoing struggle between the need for free expression and the imperative to maintain public safety.

  • Research Article
  • Citations: 2
  • 10.1332/273241721x16647876031174
Automating social media content moderation: implications for governance and labour discretion
  • Nov 1, 2022
  • Work in the Global Economy
  • Sana Ahmad + 1 more

Content moderation is key to platform operations. Given the largely outsourced character of content moderation work and the dynamic character of social media platforms, technology firms have to address the accompanying high degrees of uncertainty and labour indeterminacy. Central to their managerial strategies is the use of automated technology that allows them to organise work by incorporating social media user activities into the production process, and to control workers to ensure the accuracy of content moderation decisions. The labour process analysis is informed by two workshops with ten participants at a Berlin-based IT-services firm providing content moderation services to a lead firm based in the USA. The research design combines the design thinking method with the focus group interview method to examine worker–machine interaction. The research findings indicate that technical control results in the continuous standardising of content moderation work through the routinisation of tasks and the codification of time. Its combination with bureaucratic control through supply-side managerial functions aims to ensure quality service delivery and points to the continued significance of human supervision. Correspondingly, our study makes two main contributions: first, regarding governance in content moderation value chains, and second, regarding worker experiences of technology-driven control. Given the limited resistance observed in the labour process, we conclude that, rather than indicating a totalisation of technical control, our findings point towards the structural conditions in Germany that restrict migrant workers’ agency.

  • Research Article
  • 10.1080/1369118x.2025.2604666
Content moderation as worker management: digital labour on erotic webcam platforms
  • Jan 7, 2026
  • Information, Communication & Society
  • Rébecca S Franco + 1 more

This paper examines how content moderation operates as de facto labour management on digital platforms. Through a case study of webcam platforms, where the service provided and monetized constitutes sexual content, we demonstrate how content rules function as workplace regulations. On these platforms, webcam performers livestream to clients to sell private shows and/or earn tips. Webcam platforms occupy a unique analytical position at the intersection of gig labour and content creation platforms, enabling direct monetization of content while subjecting performers to intensive control over that content. Using LiveJasmin as a paradigmatic case, we show how detailed content guidelines and moderation operate as workplace rules, governing performer behaviour, labour output and client interactions. Our methodology combines document analysis of platform policies with 17 expert interviews with industry stakeholders and in-depth interviews with 67 webcam performers across three European countries. The findings reveal that platforms exercise granular control over working conditions through content moderation while framing these controls as safety and compliance measures rather than labour management. This enables platforms to avoid accountability towards performers as workers despite creating strict managerial relationships. The stigmatization of sex work has often excluded adult platforms from platform labour discussions, yet they illuminate worker control mechanisms that operate more implicitly across the creator economy. These findings contribute to platform studies by demonstrating how content moderation and labour management operate as unified systems of labour control, with implications for regulatory frameworks that currently separate content governance from platform labour regulation.

  • Research Article
  • Citations: 6
  • 10.37419/lr.v8.i3.1
Regulatory Goldilocks
  • May 1, 2021
  • Texas A&M Law Review
  • Nina Brown

Social media is a valuable tool that has allowed its users to connect and share ideas in unprecedented ways. But this ease of communication has also opened the door for rampant abuse. Indeed, social networks have become breeding grounds for hate speech, misinformation, terrorist activities, and other harmful content. The COVID-19 pandemic, growing civil unrest, and the polarization of American politics have exacerbated the toxicity in recent months and years. Although social platforms engage in content moderation, the criteria for determining what constitutes harmful content are unclear both to their users and to the employees tasked with removing it. This lack of transparency has afforded social platforms the flexibility of removing content as it suits them: in the way that best maximizes their profits. But it has also inspired little confidence in social platforms’ ability to solve the problem independently and has left legislators, legal scholars, and the general public calling for a more aggressive—and often government-led—approach to content moderation. The thorn in any effort to regulate content on social platforms is, of course, the First Amendment. With this in mind, a variety of different options have been suggested to ameliorate harmful content without running afoul of the Constitution. Many legislators have suggested amending or altogether repealing Section 230 of the Communications Decency Act. Section 230 is a valuable legal shield that immunizes internet service providers—like social platforms—from liability for the content that users post. This approach would likely reduce the volume of online abuses, but it would also have the practical effect of stifling harmless—and even socially beneficial—dialogue on social media. While there is a clear need for some level of content regulation for social platforms, the risks of government regulation are too great. Yet the current self-regulatory scheme has failed in that it continues to enable an abundance of harmful speech to persist online. This Article explores these models of regulation and suggests a third model: industry self-regulation. Although there is some legal scholarship on social media content moderation, none explores such a model. As this Article will demonstrate, an industry-wide governance model is the optimal solution to reduce harmful speech without hindering the free exchange of ideas on social media.
