Abstract

With the rapid growth of social media, the use of offensive and hateful language has surged, necessitating effective abusive language detection models for online platforms. This paper develops multi-class classification models to identify different types of offensive language. The input, a set of labeled tweets, is classified along three tasks: offensive language detection, offensive language categorization, and offensive language target identification. The data undergoes pre-processing, which removes NaN values and punctuation and performs tokenization, followed by the generation of a word cloud to assess data quality. The TF-IDF technique is then used for feature selection. As classifiers, several deep learning architectures are applied: bidirectional gated recurrent unit (BiGRU), multi-dense long short-term memory (LSTM), bidirectional LSTM (BiLSTM), GRU, and LSTM. All models except LSTM achieved a high accuracy of 99.9% on offensive language target identification, and BiLSTM and multi-dense LSTM obtained the lowest loss and RMSE values of 0.01 and 0.1, respectively. This research contributes to the development of effective abusive language detection methods that promote a safe and respectful online environment; the insights gained can help platform administrators moderate content efficiently and take appropriate action against offensive language.
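The pre-processing and TF-IDF feature steps described above can be sketched as follows. This is a minimal, standard-library-only illustration, not the paper's exact pipeline: the sample tweets, the lowercasing step, and the plain `log(N / df)` IDF convention are assumptions made here for clarity.

```python
import math
import string
from collections import Counter

def preprocess(tweet):
    """Lowercase a tweet, strip punctuation, and tokenize on whitespace."""
    table = str.maketrans("", "", string.punctuation)
    return tweet.lower().translate(table).split()

def tfidf(docs):
    """Return one {term: tf-idf weight} dict per document.

    tf  = term count / document length
    idf = log(N / document frequency)   # illustrative convention
    """
    tokenized = [preprocess(d) for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    return [
        {t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in tokenized
    ]
```

Terms that occur in every document receive a weight of zero under this convention, so the surviving features are the terms that discriminate between tweets, which is what makes TF-IDF a useful feature-selection step before the recurrent classifiers.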
