Abstract

Offensive language identification (OLI) in user-generated text is the automatic detection of any profanity, insult, obscenity, racism, or vulgarity that degrades an individual or a group. It supports tasks such as hate speech detection, flame detection, and cyberbullying detection. With the immense growth in access to social media, OLI helps curb abuse and harm. In this paper, we present deep learning and traditional machine learning approaches for OLI. In the deep learning approach, we build models using bidirectional LSTMs with different attention mechanisms; in the traditional machine learning approach, we use TF-IDF weighting schemes with two classifiers, Multinomial Naive Bayes and a Support Vector Machine trained with a Stochastic Gradient Descent optimizer. The approaches are evaluated on the OffensEval@SemEval2019 dataset, and our team SSN_NLP submitted runs for the three tasks of the OffensEval shared task. The best runs of SSN_NLP obtained F1 scores of 0.53, 0.48, and 0.30 and accuracies of 0.63, 0.84, and 0.42 for Tasks A, B, and C, respectively. Our approaches improved the baseline F1 scores by 12%, 26%, and 14% for Tasks A, B, and C, respectively.
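As a rough illustration of the deep learning approach described above, the sketch below builds a bidirectional LSTM classifier with an additive, Bahdanau-style attention layer in Keras. It is a minimal sketch, not the authors' exact implementation: the vocabulary size, sequence length, embedding dimension, and hidden units are assumed placeholder values, and the hand-rolled additive attention stands in for the specific Normed Bahdanau / Scaled Luong mechanisms used in the paper.

```python
# Minimal sketch (not the authors' exact code): bidirectional LSTM with
# an additive, Bahdanau-style attention layer for binary classification.
# vocab_size, max_len, embed_dim, and units are assumed placeholder values.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len, embed_dim, units = 20000, 100, 128, 64

inputs = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(inputs)
# The bidirectional LSTM returns the full hidden-state sequence so that
# attention can weight every time step.
h = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)

# Additive attention: score each time step, normalize with softmax,
# and take the weighted sum of hidden states as the sentence vector.
e = layers.Dense(units, activation="tanh")(h)   # (batch, max_len, units)
scores = layers.Dense(1)(e)                     # (batch, max_len, 1)
weights = layers.Softmax(axis=1)(scores)        # attention weights over time
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1)
)([h, weights])                                 # (batch, 2 * units)

# Binary output for Task A (offensive vs. not offensive).
outputs = layers.Dense(1, activation="sigmoid")(context)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```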

Highlights

  • Offensive language identification (OLI) is the process of detecting offensive language classes (Razavi et al., 2010) such as slurs, homophobia, profanity, extremism, insult, disguise, obscenity, racism, or vulgarity that hurt or degrade an individual or a group in user-generated text such as social media postings

  • Submitted run descriptions, grouped by task. Task A: deep learning with Normed Bahdanau attention, deep learning with Scaled Luong attention. Task B: deep learning with Normed Bahdanau attention, deep learning with Scaled Luong attention, traditional machine learning with Multinomial Naive Bayes. Task C: deep learning with Normed Bahdanau attention, deep learning with Scaled Luong attention, traditional machine learning with Support Vector Machine and Stochastic Gradient Descent optimizer

  • We have chosen the classifiers Multinomial Naive Bayes (MNB) and Support Vector Machine (SVM) with a Stochastic Gradient Descent optimizer to build the models for Task B and Task C, respectively (a minimal sketch of these pipelines follows this list)
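As referenced in the last highlight, the following is a minimal sketch of the traditional machine learning pipelines: TF-IDF features feeding a Multinomial Naive Bayes classifier (Task B) and a linear SVM trained with stochastic gradient descent (Task C). The n-gram range and other hyper-parameters shown are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch of the traditional ML pipelines: TF-IDF features with
# Multinomial Naive Bayes (Task B) and a linear SVM trained via SGD (Task C).
# Hyper-parameters such as the n-gram range are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

mnb_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("clf", MultinomialNB()),
])

# SGDClassifier with hinge loss is a linear SVM optimized with
# stochastic gradient descent.
svm_sgd_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("clf", SGDClassifier(loss="hinge")),
])

# Usage (assuming lists of tweet texts and their labels):
# mnb_pipeline.fit(train_texts, train_labels)
# predictions = mnb_pipeline.predict(test_texts)
```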

Summary

Introduction

Offensive language identification (OLI) is the process of detecting offensive language classes (Razavi et al., 2010) such as slurs, homophobia, profanity, extremism, insult, disguise, obscenity, racism, or vulgarity that hurt or degrade an individual or a group in user-generated text such as social media postings. Several research works have been reported to identify offensive language using social media content. Several workshops such as TA-COS1, TRAC2 (Kumar et al., 2018a), Abusive Language Online, and GermEval (Wiegand et al., 2018) have been organized recently in this research area. In this line, the OffensEval@SemEval2019 (Zampieri et al., 2019b) shared task focuses on the identification and categorization of offensive language in social media. It comprises three subtasks, namely offensive language detection, categorization of offensive language, and offensive language target identification. Our team SSN_NLP participated in all three subtasks.

Related Work
Data and Methodology
Results
Conclusion
