Articles published on Hate Speech
5301 Search results
Sort by Recency
- New
- Research Article
- 10.1038/s41598-025-30299-5
- Dec 4, 2025
- Scientific Reports
- V S Nivedita + 1 more
Detecting fake discussions and aggressive speech on social platforms remains a complex task due to evolving user dynamics and the minimal overlap between normal and harmful content. Traditional detection methods lag in classifying such content because they focus only on textual features or user behavior, which limits performance, especially in sparse interaction environments. To overcome these limitations, this work proposes a hybrid graph-based classification model that incorporates semantic, behavioral, and contextual data into a combined architecture. The proposed system models content and users as nodes, with interaction types such as replies, mentions, and shares forming weighted edges based on temporal and behavioral factors. A contrastive strategy is also incorporated to differentiate aligned and conflicting user-content associations. Textual representations of the comments are captured by a semantic encoder, and relational dependencies are modeled through graph attention layers, which are further enhanced with metadata such as credibility scores and user activity. Experimental evaluations on the FakeNewsNet dataset confirm the proposed model's superior performance: it attains 97.12% accuracy, surpassing conventional methods such as GRU-MCAF (94.82%) and Attention-LSTM (93.57%). The proposed model also records the highest F1-score (96.43%), precision (96.72%), and recall (96.15%), indicating consistent classification across labels compared with conventional methods.
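The paper's implementation is not included in the abstract; as a rough sketch of the weighted user-content edges it describes, the following combines a per-interaction-type weight with an exponential recency decay. The weight values and half-life are hypothetical illustrations, not the authors' parameters.

```python
import math
from collections import defaultdict

# Hypothetical weights per interaction type (not from the paper).
INTERACTION_WEIGHTS = {"reply": 1.0, "mention": 0.6, "share": 0.8}

def edge_weight(interaction, age_hours, half_life=24.0):
    """Combine the interaction-type weight with an exponential recency decay."""
    base = INTERACTION_WEIGHTS[interaction]
    return base * math.exp(-math.log(2) * age_hours / half_life)

def build_graph(events):
    """events: iterable of (user, content_id, interaction, age_hours).
    Returns an adjacency dict {(user, content_id): summed edge weight},
    accumulating repeated interactions between the same pair."""
    graph = defaultdict(float)
    for user, content, interaction, age in events:
        graph[(user, content)] += edge_weight(interaction, age)
    return dict(graph)
```

In a full pipeline, such a weighted adjacency structure would feed the graph attention layers the abstract mentions; this sketch only covers the edge-construction step.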
- New
- Research Article
- 10.1007/s13278-025-01531-x
- Dec 2, 2025
- Social Network Analysis and Mining
- Basab Nath + 3 more
Few-shot and zero-shot Assamese hate speech detection: a comparative benchmark of large language models
- New
- Research Article
- 10.11591/ijict.v14i3.pp1015-1023
- Dec 1, 2025
- International Journal of Informatics and Communication Technology (IJ-ICT)
- Vincent Vincent + 1 more
The rise of social media has enabled public expression but also fueled the spread of hate speech, contributing to social tensions and potential violence. Natural language processing (NLP), particularly text classification, has become essential for detecting hate speech. This study develops a hate speech detection model on Twitter using FastText with bidirectional long short-term memory (Bi-LSTM) and explores multilingual bidirectional encoder representations from transformers (M-BERT) for handling diverse languages. Data augmentation techniques, including easy data augmentation (EDA) methods, back translation, and generative adversarial networks (GANs), are employed to enhance classification, especially for imbalanced datasets. Results show that data augmentation significantly boosts performance. The highest F1-scores are achieved by random insertion for Indonesian (F1-score: 0.889, Accuracy: 0.879), synonym replacement for English (F1-score: 0.872, Accuracy: 0.831), and random deletion for German (F1-score: 0.853, Accuracy: 0.830) with the FastText + Bi-LSTM model. The M-BERT model performs best with random deletion for Indonesian (F1-score: 0.898, Accuracy: 0.880), random swap for English (F1-score: 0.870, Accuracy: 0.866), and random deletion for German (F1-score: 0.662, Accuracy: 0.858). These findings underscore that data augmentation effectiveness varies by language and model. This research supports efforts to mitigate hate speech's impact on social media by advancing multilingual detection capabilities.
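The EDA operations the abstract names (random deletion, random swap, and so on) are simple token-level transforms. A minimal sketch of two of them, not the authors' exact implementation:

```python
import random

def random_deletion(tokens, p=0.1, rng=None):
    """Drop each token independently with probability p; always keep at
    least one token so the example is never emptied entirely."""
    rng = rng or random.Random()
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

def random_swap(tokens, n_swaps=1, rng=None):
    """Swap the tokens at two randomly chosen positions, n_swaps times.
    The token multiset is preserved; only word order changes."""
    rng = rng or random.Random()
    tokens = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens
```

Synonym replacement and random insertion additionally require a synonym source (e.g., a thesaurus or embedding neighbors), which is why their effectiveness varies more by language.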
- New
- Research Article
- 10.21093/el-buhuth.v8i2.11703
- Dec 1, 2025
- el Buhuth: Borneo Journal of Islamic Studies
- Fitria Nofiyanti + 1 more
The purpose of this article is to analyze expressions of faith in responding to hate speech through a thematic analysis of selected hadiths that emphasize the importance of ethical discourse as a basis for action on social media. A qualitative approach was used in this research, drawing on documentation and contextual analysis of primary hadith sources, tafsir, and related literature. The findings reveal three main points: first, the quality of an individual's speech is strongly correlated with their level of faith as taught in the hadiths; second, the social context of pre-Islamic Arab society, which was filled with verbal aggression, underscores the urgency of the Prophet Muhammad's prohibition of hate speech; third, true faith is manifested in polite verbal behavior, while hate speech indicates weak faith and the potential for social division. This understanding is relevant to contemporary digital communication and affirms that the internalization of ethical speech based on hadith should serve as a foundation in education, preaching, and public policy. Overall, this study highlights that maintaining ethical communication is an integral part of strengthening faith and fostering a civilized society.
- New
- Research Article
- 10.46991/bysu.c/2025.16.2.135
- Dec 1, 2025
- Bulletin of Yerevan University C: Jurisprudence
- Arpine Hovhannisyan
In recent years, the issue of the liability regime of internet intermediaries has become a central topic of discussion within the European legal and policy context. A particularly pressing question has emerged as to whether online platforms should bear legal responsibility in situations where their infrastructure is used to disseminate insults, defamation, hate speech, or other forms of content violating fundamental human rights. This issue becomes especially pronounced when individuals whose rights have been infringed address these platforms with formal requests to remove offensive or defamatory material, yet the platforms fail to act. Such situations raise the question of two-layered (or dual) liability, which involves not only the original authors of the unlawful content but also the intermediaries who enable its publication and continued accessibility. The relevance of this topic increased significantly after 2016, particularly in the context of several presidential elections and the Brexit referendum in the United Kingdom. These developments intensified public debates around the balance between freedom of expression and regulatory control over the information environment. Currently, the European Court of Human Rights has developed an almost consolidated body of case law, indicating that internet intermediaries may be held liable when they fail to take appropriate and timely measures to remove defamatory or offensive content from their platforms upon notification. This debate and its potential regulatory solutions are also highly relevant to the Armenian legal context, particularly in light of ongoing legislative reforms and the provisions of the Constitutional Court of Armenia’s decision of October 1 of the previous year. These developments are likely to play a decisive role in shaping the future regulatory framework for the liability of internet intermediaries in Armenia.
- New
- Research Article
- 10.1002/crq.70013
- Dec 1, 2025
- Conflict Resolution Quarterly
- Ananda Kumar Biswas + 2 more
ABSTRACT Bangladesh is currently advancing rapidly in its economic and technological domains. The country was founded on secularism, equality, justice, and freedom. This study examines the ramifications of radicalization in the southwestern region. The investigation uses both qualitative and quantitative methodologies, with random sampling as the sampling procedure, and draws on both primary and secondary data. A semistructured questionnaire was used throughout the field survey; 120 survey responses, in-depth interviews (IDIs) with 20 persons, and four focus group discussions (FGDs) served as data collection instruments. Political violence and instability are currently evident in the radicalization process in Bangladesh. The lack of voting, internal factions, petrol bomb explosions, political cases, and power struggles among political groupings have risen by 40% over the past decade. Individuals exhibit intolerance toward religious aspects and attempt to convert others to their faith; the response rate is 40%. Minority migration and assaults on minorities are poised to become extreme. Approximately 80% of individuals lack social media literacy. The prevalence of social media protection, hate speech, and responses to religious matters is escalating in southwest Bangladesh. The generational gap in radicalization is inverted; teenagers exhibit greater radicalism than individuals over 50 years old.
- New
- Research Article
- 10.1097/wnn.0000000000000408
- Dec 1, 2025
- Cognitive and behavioral neurology : official journal of the Society for Behavioral and Cognitive Neurology
- Mario F Mendez
Hate toward people groups is a significant cause of human suffering, yet we understand relatively little about its neurocognitive and neuroanatomical bases. While definitions of hate vary, most agree that it can be defined as an aversion to certain others that motivates attempts to expel them through methods ranging from physical violence to passive avoidance. The evolutionary roots of group hate are in the ingroup-outgroup distinction, in which the individual favors ingroup members over outgroup members. Earlier studies on the effects of hate speech and propaganda show that the dehumanization of outgroup members facilitates hate. Although several mechanisms have been proposed, evidence suggests that dehumanization leads to hate due to the perceived failure of outgroup members to meet a moral ideal of right and wrong. As such, hate can be understood as an innate moral sentiment that enhances themes of ingroup loyalty and purity by expelling the offending outgroup. It involves regions of the brain active in social cognition, emotion, empathy, and behavioral regulation, such as the medial prefrontal cortex, inferior frontal gyrus, anterior cingulate cortex, amygdala, insula, and temporoparietal junction. By examining these neurobiological aspects of hate and understanding them in the context of the social and cultural factors that foster hate, we can gain deeper insights into this harmful phenomenon and how best to combat it.
- New
- Research Article
- 10.1016/j.softx.2025.102431
- Dec 1, 2025
- SoftwareX
- Paloma Piot + 2 more
WATCHED: A Web AI Agent Tool for Combating Hate speech by Expanding Data
- New
- Research Article
- 10.1080/10584609.2025.2585498
- Nov 29, 2025
- Political Communication
- Emily Harmer + 1 more
ABSTRACT Previous research has shown that women politicians are often subject to more incivility, hate speech, and sexualized abuse on social media. Until now, studies have tended to analyze a single platform, and no research has established whether these experiences are the same across different platforms. This study deploys a two-stage design to isolate a platform effect in online incivility and abuse. We selected 18 women politicians from a range of UK political parties because they shared identical posts across three platforms: Facebook, Instagram, and Twitter. We collected all identical posts and the replies to them on each platform. The replies were content analyzed for elements of incivility and legitimate criticism, but also for support and respectful responses. Multi-level binary logistic models were run on these data to assess the platform effect, controlling for party and whether the original post mentioned a gendered topic. The replies were then subject to a qualitative analysis. The results showed a clear platform effect, with Twitter being the source of the majority of uncivil, insulting, and othering replies to women politicians. Twitter posts also received the least support and the fewest polite messages. Conversely, Instagram appeared to be less hostile, featuring more polite interaction and a high proportion of supportive responses, accounting for well over half of all supportive responses across the sample.
- New
- Research Article
- 10.46348/car.v6i2.410
- Nov 29, 2025
- CARAKA: Jurnal Teologi Biblika dan Praktika
- Aldi Abdillah
This article analyzes the phenomenon of political buzzers in Indonesia through the lens of Matthew 28:11–15, which recounts the “Lie of the Sanhedrin.” The author begins by describing buzzers as actors who disseminate political narratives, including hoaxes and hate speech, often used to manipulate public opinion. Using a hermeneutical approach, the article explores the political background of Matthew 28:11–15 and interprets the chief priests’ act of bribing Roman soldiers to spread a false story about Jesus’ resurrection. The discussion highlights that the story of the guards at Jesus’ tomb is unique to Matthew’s Gospel and serves as a counter-apologetic response to accusations that Jesus’ body was stolen. The author interprets this as a form of resistance by the Jewish-Christian community against the religious-political authorities of the time. The article also explores the irony of power relations among Pilate, the Roman soldiers, and the Jewish leaders, showing how Matthew exposes injustice and conspiracy. As a contribution to contextual theology, the author proposes two attitudes: “responding” (critically engaging with buzzer narratives) and “unveiling” (exposing the interests behind such narratives).
- New
- Research Article
- 10.5120/ijca2025925566
- Nov 28, 2025
- International Journal of Computer Applications
- Alan Janbey
Cross-Platform NLP Framework for Detecting LGBTQIA Hate Speech: Evaluation on Reddit and Simulated Twitter Datasets
- New
- Research Article
- 10.1007/s13278-025-01551-7
- Nov 25, 2025
- Social Network Analysis and Mining
- Ehtesham Hashmi + 5 more
Abstract Text classification remains a fundamental task in natural language processing, with applications spanning sentiment analysis, spam detection, and hate speech detection (HSD). However, its performance is often limited when relying exclusively on either handcrafted linguistic features or semantic embedding representations in isolation. In real-world scenarios, text often exhibits high variability in style, structure, and context, making it challenging for single-representation approaches to capture both syntactic nuances and deeper semantic relationships. This limitation can lead to reduced robustness and generalization, particularly when models are deployed across different tasks. This study proposes a hybrid feature fusion framework that integrates interpretable linguistic features extracted using the Linguistic Feature Toolkit with advanced semantic embeddings derived from Doc2Vec and a transformer-based model. By combining syntactic structures with deep contextual representations, the approach aims to capture both surface-level and semantic nuances of textual data. The framework is evaluated on five benchmark datasets spanning three critical domains: fake news detection, Bloom's Taxonomy classification, and HSD. Extensive experiments using multiple machine learning classifiers demonstrate that the fusion of linguistic and semantic features consistently outperforms single-feature baselines across all domains. The Bidirectional Encoder Representations from Transformers (BERT) linguistic-feature fusion approach achieved accuracies of up to 81% for fake news detection, 67% for Bloom's Taxonomy classification, and 72% for HSD, with corresponding improvements in precision, recall, and F1-score. These findings confirm the effectiveness of integrating linguistic interpretability with deep semantic modeling, offering a robust and domain-agnostic solution for advancing text classification performance. While the study does not perform explicit cross-domain transfer experiments, it provides a comprehensive multi-domain benchmarking framework and quantifies domain shift across diverse datasets.
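The fusion step is described only abstractly; in its simplest form, feature fusion concatenates a handcrafted linguistic feature vector with a dense embedding. The min-max scaling used here to put the two blocks on comparable scales is an assumption for illustration, not necessarily the paper's normalization:

```python
def fuse_features(linguistic, embedding):
    """Concatenate handcrafted linguistic features with a dense embedding.
    The linguistic block is min-max scaled per vector (a hypothetical
    normalization choice) so its counts do not dominate the embedding."""
    lo, hi = min(linguistic), max(linguistic)
    span = (hi - lo) or 1.0  # guard against a constant vector
    scaled = [(x - lo) / span for x in linguistic]
    return scaled + list(embedding)
```

The fused vector would then be fed to any downstream classifier; in practice, per-feature scaling fit on the training set is preferable to per-vector scaling.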
- New
- Research Article
- 10.1177/00111287251384670
- Nov 24, 2025
- Crime & Delinquency
- Matteo Vergani + 6 more
This article addresses the proliferation of definitions and approaches used to characterize the hate element in behaviors motivated by hate, including hate crimes, hate speech, and behaviors motivated by prejudice against specific identities (e.g., homophobia, anti-Semitism, Islamophobia), and investigates whether these definitions cluster into distinct types. Using machine learning, we clustered 423 definitions from academic and gray literature in five languages between 1990 and 2021, based on 16 theoretically derived categories. The resulting typology captures the diversity of definitions from ten countries in North America, Europe, and Oceania, providing a comprehensive framework for understanding how the hate element is conceptualized in these contexts. The findings offer a basis for future research and may help inform policy responses to hate-motivated behaviors.
- New
- Research Article
- 10.55942/pssj.v5i9.1066
- Nov 24, 2025
- Priviet Social Sciences Journal
- Irwanto Irwanto
Social communication has changed entirely because of the advancement of digital technology, especially in the context of religious life. With an emphasis on initiatives to foster peaceful discourse in the digital age, this study explores social communication from the perspectives of religious moderation and science. Using a literature review approach, this study examined scholarly works, research reports, and current scientific sources on social communication tactics, digital literacy, and religious moderation. The results show that religiously moderated communication can be a useful tool for reducing polarization, lowering the risk of online extremism, and promoting tolerance and respect for diversity. From a scientific standpoint, digital literacy plays a vital role in strengthening the public's critical ability toward the flow of information, allowing them to filter extremist propaganda, hate speech, and hoaxes. Additionally, communication tactics that combine scientific methods with religious principles can make messages more inclusive and logical, while also enhancing their validity. To create a healthy, moderate, and long-lasting social communication ecosystem, this study also highlights the significance of multi-stakeholder collaboration among religious leaders, academics, educators, digital media managers, and the government. Therefore, this study claims that integrating scientific and religious moderation is essential for fostering peaceful socio-religious discourse in the digital age.
- New
- Research Article
- 10.5539/jsd.v19n1p1
- Nov 24, 2025
- Journal of Sustainable Development
- Anab Chrysogonus + 2 more
The 1992 Constitution of Ghana guarantees the right to education for all children. The Convention on the Rights of the Child, which Ghana ratified in 1990, further strengthens this. However, Fulbe children in Ghana face several barriers that limit their access to education. In this study we employed a qualitative approach and multi-case study design to explore inclusive education models, legal and policy frameworks and the views of stakeholders on the barriers and strategies for expanding access to quality education for Fulbe children in Ghana. The methods used included critical reviews of secondary sources of information, focus group discussions and key informant interviews. A key conclusion from the study is that Fulbe children remain excluded from the educational system in Ghana due to weak implementation of inclusive education laws and policies; a paltry 0.1% of the annual education budget is spent on inclusive education. High levels of poverty among Fulbe households, negative Fulbe gendered norms, stereotypes, and discrimination against the Fulbe further contribute to their exclusion. Key recommendations include criminalising hate speech against the Fulbe in Ghana, including specific Fulbe education objectives and indicators in the next Education Strategic Plan of Ghana; reviewing the Inclusive Education Policy of Ghana; establishing a Fulbe children and youth desk within the Complementary Education Agency in Ghana; and pooling Metropolitan, Municipal and District Assemblies resources to establish community learning centres and mobile schools to improve access to quality complementary education for Fulbe children in Ghana.
- New
- Research Article
- 10.1145/3778030
- Nov 24, 2025
- ACM Transactions on Multimedia Computing, Communications, and Applications
- Baoping Liu + 3 more
Deepfake techniques can now generate multimodal content comprising video and audio tracks. Compared with unimodal Deepfake images, videos, or audio, multimodal Deepfake content is more deceptive and more easily leads to the dissemination of hate speech, incitement to violence, and disinformation. The detection of multimodal Deepfakes has therefore attracted much research attention recently. While cross-attention shows promising capacity for modelling the complicated dependencies between audio and video in multimodal Deepfake detection, it fails to learn accurate cross-modal patterns if audio and video are misaligned in the temporal dimension. Besides, most current multimodal Deepfake detectors only provide a binary classification label, lacking the fine-grained localization needed to identify significant forgery across multiple dimensions (e.g., the modal, temporal, and spatial dimensions). In this study, we propose a novel multimodal Deepfake detection framework named ForgeFinder, which goes beyond binary label prediction and achieves multi-grained forgery localization in the modal and spatiotemporal dimensions. ForgeFinder incorporates both intra-modal and cross-modal inconsistencies to classify multimodal input. In detail, we adopt serial spatiotemporal self-attention (SSTSA) in the Intra-modal Inconsistency Explorer (Intra-MIE), which allows the temporal self-attention to run in the original dimension without incurring unacceptable computational complexity. In the Cross-modal Inconsistency Explorer (Cross-MIE), we propose offset-shifted cross-attention (OSCA), which introduces a time offset term into conventional cross-attention to mitigate the inaccurate modelling of cross-modal dependencies caused by temporal misalignment. By adopting the outputs of Intra-MIE for unimodal tasks, we estimate the likelihood that each modality has been manipulated and localize tampered modalities. At the same time, the attention weights of SSTSA can be visualized to pinpoint the temporal and spatial distribution of Deepfake manipulation. Therefore, for a single audio-video input sample, ForgeFinder not only determines the authenticity of the overall input but also localizes the modality, temporal segments, and spatial coordinates of significant forgery, contributing to more comprehensive forensic analysis. The results of extensive experiments indicate that ForgeFinder achieves state-of-the-art detection performance as well as accurate forgery localization in the modal and spatiotemporal dimensions. Furthermore, experiments on content generated by diffusion models (DMs) show that our model also effectively recognizes DM-generated content.
- New
- Research Article
- 10.30872/calls.v11i0.23047
- Nov 24, 2025
- CaLLs (Journal of Culture, Arts, Literature, and Linguistics)
- Ali Kusno
This research utilizes Generative Artificial Intelligence (AI) as an analytical tool for digital discourse. Its main objective is to identify the rhetorical strategies of separatism and ethnic hate speech against the Javanese people disseminated on social media, particularly on Instagram. Using a qualitative approach and Fairclough's Critical Discourse Analysis (CDA) framework, this study examines how texts containing discriminatory sentiments, emotional sentence constructions, and exclusive symbols are produced and spread in the digital space. In-depth analysis reveals that user interaction patterns accelerate the dissemination of separatist ideas. From a forensic linguistics perspective, these findings underscore the damaging impact of digital rhetoric on social cohesion, with the potential to erode brotherhood and trigger the fragmentation of national identity. This paper argues that hate narratives are not merely linguistic constructions but also social actions that require a multidisciplinary response. The study concludes with specific recommendations, such as the development of counter-narratives based on linguistic evidence, improved digital literacy, and the formulation of public policies that are pro-national unity. This research provides a significant contribution to maintaining national stability amid information disruption.
- New
- Research Article
- 10.26562/ijiris.2025.v1110.11
- Nov 22, 2025
- International Journal of Innovative Research in Information Security
- Nithya Kalyani
In the digital age, cyberbullying has become a widespread and dangerous problem that causes serious emotional pain and psychological harm, especially to teenagers and young adults. An automated, reliable, and accurate method is required to detect and minimize instances of cyberbullying across multiple online communication channels. Current cybercrime tracking techniques frequently rely on models such as SVM and Naive Bayes, which perform poorly on large, noisy datasets, or they simply perform a binary classification (crime or not a crime) without identifying the type of crime. Furthermore, existing systems frequently rely on ineffective human reporting for intervention and lack real-time analysis. To overcome these restrictions, this study proposes a cyberbullying detection system based on Long Short-Term Memory (LSTM) networks, a kind of Recurrent Neural Network (RNN) well suited to analysing sequential text input. The main goal is to create a reliable model that can precisely recognize and categorize instances of hate speech and cyberbullying in text. Importantly, the proposal incorporates an automated user blocking mechanism and a unique reputation score, which sets it apart from purely predictive methods. When wrongdoing is detected, the user's dynamic reputation score is decreased. The system initiates an immediate, automated block from the platform when this score drops below a predetermined threshold.
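The reputation-and-blocking mechanism described above can be sketched as follows. The starting score, penalty, and threshold values are hypothetical illustrations, not parameters from the paper:

```python
class ReputationModerator:
    """Sketch of a reputation-based auto-blocking mechanism: each detected
    offense lowers a user's score, and once the score falls below a
    threshold the user is blocked automatically."""

    def __init__(self, start=100.0, penalty=25.0, threshold=50.0):
        # All three parameters are assumed values for illustration.
        self.start, self.penalty, self.threshold = start, penalty, threshold
        self.scores, self.blocked = {}, set()

    def report_offense(self, user):
        """Apply one penalty; block the user if the score drops below threshold."""
        score = self.scores.get(user, self.start) - self.penalty
        self.scores[user] = score
        if score < self.threshold:
            self.blocked.add(user)
        return score

    def is_blocked(self, user):
        return user in self.blocked
```

In the described system, `report_offense` would be triggered by the LSTM classifier flagging a message as hate speech or cyberbullying; here it is called directly for clarity.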
- New
- Research Article
- 10.1038/s41598-025-25879-4
- Nov 21, 2025
- Scientific Reports
- Loay Hatem + 3 more
Online social networks (OSNs) are currently the most widely utilized interactive media for interpersonal communication, emotional expression, and information sharing. Despite the helpful and fascinating content, inappropriate or abusive content, such as toxicity, hate speech, and insults, can occasionally be shared on social networks. Any kind of online abuse, including but not limited to cyberbullying, discrimination, abusive language, profanity, flames, hate speech, and harassment, is considered toxic content. While the majority of toxicity detection attempts have focused on English text, there has been little effort for the Arabic language. In this work, we constructed a standard Arabic dataset that can be used for toxicity and abuse detection on OSNs. The proposed dataset was annotated by five expert native Arabic speakers and linguists. To evaluate the dataset, we conducted a series of experiments comparing the performance of sixteen machine learning algorithms, the FastText model, and seven transfer learning architectures. Furthermore, we used four text representation techniques: bag of words (BOW), term frequency–inverse document frequency (TF-IDF), FastText, and bidirectional encoder representations from transformers (BERT). Our experimental results demonstrated that the fine-tuned MARBERTv2 model with BERT embeddings outperforms the other models, achieving an F1-score of 92.43% and an accuracy of 92.21%. Notably, this study highlights the importance of addressing toxicity on social media platforms across diverse languages and cultures, and marks a significant step forward in the classification of toxic Arabic tweets.
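Among the representations the abstract lists, TF-IDF is the simplest to show concretely. This sketch uses the plain `tf * log(N / df)` weighting, which may differ from the smoothed variant used in the study:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one dict per document mapping
    term -> tf * idf, where tf is the term's relative frequency in the
    document and idf = log(N / df) over the N documents."""
    n = len(docs)
    df = Counter()  # document frequency: in how many docs each term appears
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out
```

Note that with this unsmoothed variant, a term appearing in every document gets weight zero; library implementations typically add smoothing to avoid discarding such terms entirely.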
- New
- Research Article
- 10.1007/s11135-025-02463-6
- Nov 19, 2025
- Quality & Quantity
- Figen Eğin + 2 more
AI-powered detection of hate speech against refugees in Turkish social media