Abstract

The rapid identification of offensive language on social media is important for preventing its viral spread and reducing the circulation of malicious content, such as cyberbullying and material related to self-harm. In existing research, public offensive-language datasets are small, label quality is uneven, and the performance of pre-trained models is unsatisfactory. To address these problems, we propose a multi-semantic fusion model based on data augmentation (MSF). Data augmentation is carried out by back translation, which mitigates the impact of small datasets on performance. At the same time, we use a novel fusion mechanism that combines word-level semantic features with character n-gram features. Experimental results show that the proposed model effectively extracts the semantic information of offensive language and achieves state-of-the-art performance on both evaluation datasets.
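As a minimal illustration of the character n-gram features mentioned above (the paper's actual character-capture module and vocabulary are not reproduced here), a FastText-style extractor with word-boundary markers might look like this:

```python
def char_ngrams(text, n=3):
    """Extract overlapping character n-grams from a string.

    A toy sketch only: boundary markers `<` and `>` follow the
    FastText convention so that prefixes and suffixes are
    distinguishable; the paper's own feature pipeline may differ.
    """
    padded = f"<{text}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Character trigrams of a single token:
print(char_ngrams("hate", 3))  # → ['<ha', 'hat', 'ate', 'te>']
```

Features like these are robust to the deliberate misspellings common in offensive posts (e.g. "h4te" still shares n-grams with "hate"), which is one motivation for fusing them with word-level semantics.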

Highlights

  • Offensive language is usually defined as hurtful, derogatory, or obscene comments made by one person to another

  • To solve these problems, we proposed a multi-semantic fusion network based on data augmentation (MSF)

  • We focused on offensive-language detection through data augmentation, n-grams character features, and semantic fusion

Introduction

Offensive language is usually defined as hurtful, derogatory, or obscene comments made by one person to another. Deep-learning methods build on word-embedding representations obtained from training on large-scale corpora and use a neural network structure to extract and merge semantic features. In addition to CNN-based methods, Badjatiya et al. [5] used a Long Short-Term Memory (LSTM) network and FastText [6] to detect offensive language. These methods outperform traditional machine-learning approaches. However, traditional deep learning relies on static word embeddings, which cannot resolve polysemy, leading to a significant reduction in classifier performance. Self-supervised learning, which obtains task-agnostic pre-trained models from large-scale data, has been a great success in natural language processing. (1) For small-scale offensive-language detection tasks, we adopt back translation to augment the data so that the model can obtain richer semantic information.
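Point (1) relies on back translation: each sentence is translated into a pivot language and then back, producing a paraphrase that enlarges the training set. A minimal sketch of this round-trip pipeline, using toy word maps in place of a real machine-translation system (which this excerpt does not specify), could look like:

```python
def back_translate(sentence, to_pivot, from_pivot):
    """Round-trip a sentence through a pivot language to get a paraphrase.

    `to_pivot` and `from_pivot` are stand-ins for a real MT system
    (e.g. an NMT model or translation API); here they are toy
    word-for-word maps used only to show the pipeline's shape.
    """
    pivot = " ".join(to_pivot.get(w, w) for w in sentence.split())
    return " ".join(from_pivot.get(w, w) for w in pivot.split())

# Hypothetical en->de->en maps; a real system introduces richer
# lexical and syntactic variation than this word-level substitution.
en_de = {"movie": "Film", "great": "toll"}
de_en = {"Film": "film", "toll": "awesome"}

augmented = back_translate("the movie was great", en_de, de_en)
print(augmented)  # → "the film was awesome"
```

The augmented paraphrase keeps the original label, so each round trip roughly doubles the usable training examples without manual annotation.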

Related Work
Methodology
Data Augmentation
Deep Semantics Module
Character-Capture Module
Interactive Fusion Mechanism
Model Training
Experimental Settings
Comparison with Baselines
Ablation Experiment
The Influence of the N-Grams
Error Analysis
Conclusions and Future Work