Pay it Forward? How Target Group and Social Norms Affect Online Bystander Intervention
ABSTRACT Hate speech is pervasive on social media. Because such incidents can harm targets and foster hostile discourses, understanding what shapes bystander intervention is crucial. In two consecutive experimental studies, we examined how two key situational factors impact actual bystander intervention behavior: the reference group of the online hate speech (professional vs. minority group) and the platforms’ communication norm (prosocial supportive vs. anti-social non-supportive). Study 1 (N = 442) revealed that social media users perceive online hate speech targeting politicians to be less threatening than such speech targeting people with a migration background or Muslims. Study 2 (N = 970), a 2-day social media simulation, confirmed that hate speech targeting politicians led to less bystander intervention. Additionally, a prosocial communication norm on the simulated platform indirectly encouraged individuals’ actual intervention by reinforcing the perceived social norm for prosocial intervention and, in turn, participants’ sense of personal responsibility.
- Book Chapter
9
- 10.1108/978-1-83982-848-520211052
- Jun 4, 2021
Bystander Apathy and Intervention in the Era of Social Media
- Research Article
9
- 10.1177/14773708231156328
- Mar 13, 2023
- European Journal of Criminology
A large share of online users has already witnessed online hate speech. Because targets tend to interpret bystanders’ lack of reaction as agreement with the hate speech, bystander intervention in online hate speech is crucial, as it can help alleviate negative consequences. Despite evidence regarding online bystander intervention, however, whether bystanders evaluate online hate speech targeting different social groups as equally uncivil and, thereby, equally worthy of intervention remains largely unclear. Thus, we conducted an online experiment systematically varying the type of online hate speech (homophobia, racism, or misogyny). The results demonstrate that, although all three forms were perceived as uncivil, homophobic hate speech was perceived to be less uncivil than hate speech against women. Consequently, misogynist hate speech, compared to homophobic hate speech, increased feelings of personal responsibility and, in turn, boosted willingness to confront.
- Research Article
27
- 10.3389/feduc.2023.1076249
- Apr 6, 2023
- Frontiers in Education
Hate speech, or intentional derogatory expressions about people based on assigned group characteristics, has been studied primarily in online contexts. Less is known about the occurrence of this phenomenon in schools. As it has negative consequences for victims, perpetrators, and those who witness it, it is crucial to characterize the occurrence of offline (i.e., in the school) and online hate speech to describe similarities and differences between these two socialization contexts. The present study aimed to investigate the prevalence of hate speech witnessing, victimization, and perpetration in a sample of 3,620 7th–9th graders (51% self-identified as female) from 42 schools in Germany and Switzerland. We found that 67% of the students witnessed hate speech in their school, and 65% witnessed online hate speech at least once in the past 12 months. Approximately 21% of the students self-identified as offline perpetrators and 33% as offline victims, whereas these percentages were lower for online hate speech (13% and 20%, respectively). In both settings, skin color and origin were the most common group references for hate speech (50% offline and 63% online). Offline hate speech mainly came from classmates (88%), unknown sources (e.g., graffiti; 19%), or teachers (12%), whereas online hate speech mostly came from unknown persons (77%). The most frequent forms of offline hate speech were offensive jokes (94%) and the spread of lies and rumors about the members of a specific social group (84%). Significant differences by country, gender, and migration background were observed. Girls reported more offline victimization experiences, less perpetration, and a greater frequency of witnessing hate speech. This difference was larger in magnitude in the online setting. Students in Switzerland reported being exposed to hate speech more often than students in Germany.
Students with a migration background reported higher hate speech victimization based on skin color and origin than students without a migration background. The high prevalence of hate speech highlights the need for school-based prevention programs. Our findings are discussed in terms of the practical implications.
- Research Article
2
- 10.20355/jcie29489
- Jul 11, 2022
- Journal of Contemporary Issues in Education
In this article, we highlight the perspectives of marginalized Canadian youth regarding hate speech on social media. Specifically, our research focus is on the complexity and intersectionality involved in cyber violence, especially in relation to marginalized identities. Twenty-five participants aged 18 to 25 studying at a central Canadian university (from an initial sample of 90 participants) who self-identified as victims of hate speech were invited to share their experiences and narrate their stories. Research results demonstrate that online hate speech is growing in Canada to the extent that it has become normalized. This has serious implications for the well-being of Canadian youth - both perpetrators and victims of hate speech. The main targets of hate speech on social media in Canada are immigrants and minorities, particularly Muslims. Results show that online hate speech has significant consequences for the lives of Canadian youth. The repercussions for victims' mental and physical well-being manifest in problems ranging from alienation, identity issues, and deterioration of psychological and physical health to cyber and in-person bullying, and much more. The study concludes that while there are definite links between the rise of online hate speech, deterioration of mental and physical health, and increased attacks on immigrants and minorities, not much action has gone into policymaking and education to correct the situation.
- Research Article
10
- 10.17356/ieejsp.v4i4.503
- Jan 16, 2019
- Intersections
Online hate speech, especially on social media platforms, is the subject of both policy and political debate in Europe and globally - from the fragmentation of network publics to echo chambers and bubble phenomena, from networked outrage to networked populism, from trolls and bullies to propaganda and non-linear cyberwarfare. Both researchers and Facebook Community Standards see the identification of the potential targets of hateful or antagonistic speech as key to classifying and distinguishing the latter from arguments that represent political viewpoints protected by freedom of expression rights. This research is an exploratory analysis of mentions of targets of hate speech in comments on 106 public Facebook pages in Romanian and Hungarian from January 2015 to December 2017. A total of 1.8 million comments were collected through API interrogation and analyzed using a text-mining niche-dictionaries approach and co-occurrence analysis to reveal connections to events on the media and political agenda, as well as discursive patterns. Findings indicate that in both countries the most prominent targets mentioned are connected to current events on the political and media agenda, that targets are most frequently mentioned in contexts created by politicians and news media, and that discursive patterns in both countries involve the proliferation of similar stereotypes about certain target groups.
- Research Article
22
- 10.1177/14614448221125417
- Oct 8, 2022
- New Media & Society
Most adolescents and young adults frequently encounter hate speech online. Although online bystander intervention is essential to combating such hate, young bystanders may need support with initiating interventions online. Thus, to illuminate the factors behind young bystanders’ intervention, we conducted a nationwide, quota-based, quantitative online survey of 1,180 young adults in Germany. Among the results, perceived personal responsibility for combating online hate speech positively predicted online bystanders’ direct and indirect intervention. Moreover, frequent exposure to online hate speech was positively associated with bystander intervention, whereas a perceived threat or low self-efficacy reduced the likelihood of intervention. Also, a greater acceptance of negative consequences and being educated about online hate speech through peers or campaigns positively predicted some direct and indirect forms of online bystander intervention.
- Book Chapter
8
- 10.1108/978-1-83982-848-520211016
- Jun 4, 2021
Creating the Other in Online Interaction: Othering Online Discourse Theory
- Conference Article
4
- 10.1145/3539597.3572721
- Feb 27, 2023
Social media sites such as Twitter and Facebook have connected billions of people and given users the opportunity to share their ideas and opinions instantly. That said, they also carry several negative consequences, such as online harassment, trolling, cyber-bullying, fake news, and hate speech. Of these, hate speech presents a unique challenge, as it is deeply ingrained in our society and is often linked with offline violence. Social media platforms rely on human moderators to identify hate speech and take necessary action. However, with the increase in online hate speech, these platforms are turning toward automated hate speech detection and mitigation systems. This shift brings several challenges and is hence an important avenue for the computational social science community to explore.
- Research Article
- 10.36645/mtlr.29.2.coca-cola
- Jan 1, 2023
- Michigan Technology Law Review
Hate speech is a contextual phenomenon. What offends or inflames in one context may differ from what incites violence in a different time, place, and cultural landscape. Theories of hate speech, especially Susan Benesch’s concept of “dangerous speech” (hateful speech that incites violence), have focused on the factors that cut across these paradigms. However, the existing scholarship is narrowly focused on situations of mass violence or societal unrest in America or Europe. This paper discusses how online hate speech may operate differently in a postcolonial context. While hate speech impacts all societies, the global South—Africa in particular—has been sorely understudied. I posit that in postcolonial circumstances, the interaction of multiple cultural contexts and social meanings forms concurrent layers of interpretation that are often inaccessible to outsiders. This study expands the concept of online harms by examining the political, social, and cultural dimensions of data-intensive technologies. The paper’s theories are informed by fieldwork that local partners and I conducted in Kasese, Uganda in 2019–2020, focusing on social unrest and lethal violence in the region following the 2016 elections. The research, completed with assistance from the Berkeley Human Rights Clinic, included examining the background and circumstances of the conflict; investigating social media’s role in the conflict; designing a curriculum around hate speech and disinformation for Ugandan audiences; creating a community-sourced lexicon of hateful terms; and incorporating community-based feedback on proposed strategies for mitigating hate speech and disinformation. I begin with a literature review of legal theory around hate speech, with a particular focus on Africa, and then turn to the legal context around hate speech and social media use in Uganda, examining how the social media landscape fueled past conflicts.
Then I explain my Kasese fieldwork and the study’s methodology, before describing initial results. I follow with a discussion of applications to industry, specifically how hate speech is defined and treated by Meta’s Facebook, the dominant social media provider in Kasese. The paper then progresses to a discussion of the implications of the study results and legal and policy recommendations for technology companies stemming from these findings. Importantly, I apply the research findings to expand existing scholarship by proposing a new sixth “hallmark of dangerous speech” to augment Benesch’s paradigm. Adding “calls for geographic exclusion” as a new qualifier for dangerous speech stems from the particular characteristics embodied by postcolonial hate speech. Examples from the Kasese study illustrate how this phenomenon upends platforms’ expectations of hate speech—which may not consider “Coca-Cola bottle” to be an epithet. The application of this new hallmark will create a more inclusive understanding of hate speech in localized contexts. This paper’s conclusions and questions may challenge platforms that must address hate speech and content moderation at a global scope and scale. Finally, the paper examines the prevalence and role of social media platforms in Africa, and how these platforms have provided resources and engagement with civil society in these regions.
- Research Article
- 10.31703/gdpmr.2023(vi-ii).11
- Jun 30, 2023
- Global Digital & Print Media Review
Hate speech is a complicated concept, and there is no locally recognized definition of it in Pakistan. However, some academic publications and court precedents indicate what falls within the ambit of ‘hate speech’: according to them, the danger and damage caused by certain forms of expression are globally acknowledged in defining hate speech. Our study explores existing research findings through a systematic review of how social media may or may not create opportunities for online hate speech and which kinds of hate speech are most often disseminated on social media. Of 50 research papers found in the searches, a sample of 20 studies discussing online hate speech from 2015 to 2020 was analyzed. The reviewed studies provide exploratory data about the reasons hate speech happens on social media and how social media makes space for hate speech and cyber hate. The findings of this study provide recommendations to counter hate speech on social media.
- Research Article
97
- 10.1609/icwsm.v10i1.14811
- Aug 4, 2021
- Proceedings of the International AAAI Conference on Web and Social Media
Social media systems offer Internet users a congenial platform to freely express their thoughts and opinions. Although this property represents incredible and unique communication opportunities, it also brings important challenges. Online hate speech is an archetypal example of such challenges. Despite its magnitude and scale, there is a significant gap in understanding the nature of hate speech on social media. In this paper, we provide a first-of-its-kind systematic, large-scale measurement study of the main targets of hate speech in online social media. To do so, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both systems. Our results identify online hate speech forms and offer a broader understanding of the phenomenon, providing directions for prevention and detection approaches.
- Research Article
- 10.1177/20563051251325598
- Jan 1, 2025
- Social Media + Society
Hate speech is widespread in digital media, and such incidents can harm individuals and fuel hostile discourses. Therefore, understanding the factors that shape bystander intervention is crucial. Despite frequent calls for more research, there is a need for greater understanding of how perceived political and digital media literacy are related to the frequency of various forms of online bystander intervention, such as counter-speech or reporting. Based on a national online survey of German citizens (N = 2,691), we investigated how perceived political and digital media literacy of individuals with prior experience in addressing online incivilities (n = 672) relates to (private and public) direct and indirect forms of intervention against online hate speech. The results indicate that a sense of empowerment regarding digital media content particularly increases direct, public interventions, such as uttering counter-speech.
- Research Article
13
- 10.1016/j.telpol.2022.102411
- Jul 16, 2022
- Telecommunications Policy
New school speech regulation as a regulatory strategy against hate speech on social media: The case of Germany's NetzDG
- Video Transcripts
- 10.48448/7ztj-pt51
- Jul 9, 2022
Hate speech existed in Ethiopia before social media, but with very limited reach. With the arrival of social media companies that have little or no business interest at stake in low-resourced languages such as those spoken in Ethiopia, diaspora activists who have little or nothing to lose from engaging in online hate speech, and several technical and institutional challenges, hate speech on social media gradually became mainstream in Ethiopia, tearing societies apart and eventually serving as an animating force for a genocidal war on Tigrayans. In this talk, I will briefly assess the normalization of hate speech in Ethiopia, the factors that led to it, the role hate speech and social media played during the Tigray war, social media hate speech detection and monitoring, and what should be done going forward.
- Front Matter
- 10.1089/cyber.2023.29283.editorial
- Jun 13, 2023
- Cyberpsychology, Behavior, and Social Networking
Putting the Toothpaste Back in the Tube: Against Online Hate Speech.