Abstract

Offensive language on social media platforms is increasingly common, and its automatic detection has become a major challenge in modern society. The complexity of natural language constructs makes this task even more challenging. Until now, most research has focused on resource-rich languages such as English. Urdu is written on social media in two scripts: the native Urdu script, which uses Urdu characters, and the Roman script, which uses English characters. Urdu and Hindi differ only in their writing scripts, and their Roman scripts are similar. This study addresses the detection of offensive language in user comments written in Urdu, a resource-poor language. We present the first offensive-language dataset of Urdu, containing user-generated comments from social media. We extract features using individual and combined n-gram techniques at the character level and the word level, and apply seventeen classifiers from seven machine learning techniques to detect offensive language in both Urdu and Roman Urdu comments. Experiments show that regression-based models using character n-grams perform best on Urdu text, and that character-level tri-grams outperform all other word and character n-grams. LogitBoost and SimpleLogistic outperform the other models, achieving F-measures of 99.2% on the Roman Urdu dataset and 95.9% on the Urdu dataset, respectively. Our dataset is publicly available on GitHub for future research.
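The character-level tri-grams highlighted above can be illustrated with a minimal sketch. The function below is not the authors' implementation (the paper's classifiers, LogitBoost and SimpleLogistic, are typically run in Weka); it only shows, in pure Python, how a comment is turned into a bag of character tri-gram counts, using invented toy Roman Urdu comments rather than entries from the actual dataset:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Return the character n-grams of a text (n=3 gives the
    character tri-grams the paper found most effective)."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_counts(comments, n=3):
    """Build a bag-of-n-grams count vector (a Counter) per comment."""
    return [Counter(char_ngrams(c, n)) for c in comments]

# Toy Roman Urdu comments (illustrative only, not from the dataset).
comments = ["bohat acha video", "ye kya bakwas hai"]
vectors = ngram_counts(comments)
```

Each `Counter` maps a tri-gram (e.g. `"boh"`) to its frequency in that comment; stacking these counts over a shared vocabulary yields the feature matrix fed to a classifier.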

Highlights

  • Cyberbullying using offensive language on the Internet has become a major problem among all age groups

  • This paper investigates the performance of different machine learning techniques for Urdu and Roman Urdu text

  • Because the datasets are not pre-split into training, validation, and test sets, we used ten-fold cross-validation to train and test the machine learning models [17], [36]
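The ten-fold cross-validation mentioned in the last highlight can be sketched as follows. This is a minimal pure-Python illustration of how the sample indices are partitioned, not the authors' actual experimental setup: each fold serves once as the test set while the remaining nine folds form the training set.

```python
def kfold_indices(n_samples, k=10):
    """Partition sample indices 0..n_samples-1 into k folds and
    return k (train_indices, test_indices) pairs."""
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Each fold is the test set once; the rest are the training set.
    return [
        ([idx for j, f in enumerate(folds) if j != i for idx in f], test)
        for i, test in enumerate(folds)
    ]

splits = kfold_indices(100, k=10)  # 10 (train, test) pairs
```

A model is then trained and evaluated once per pair, and the reported score (e.g. F-measure) is averaged over the ten runs.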


Introduction

Cyberbullying through offensive language on the Internet has become a major problem across all age groups. Automatically detecting offensive language on social media applications, websites, and blogs is a difficult but important task. Social media platforms (such as Twitter, YouTube, and Facebook) provide a common place to communicate and share opinions about topics such as news, videos, and personalities, and serve as a central point of communication among people worldwide. Users usually prefer, and feel more comfortable, writing their opinions, feedback, or comments about online products, videos, and articles in their native language rather than in English [3]. It is therefore important to design an automatic system that detects, stops, or bans offensive language before it is published online.
