Abstract

With the advent of Web 2.0, various platforms and tools have been developed that allow internet users to express their opinions and thoughts on diverse topics and events. Nevertheless, some users misuse these platforms by posting hateful and offensive speech, which harms the mental health of the online community. The detection of offensive language has therefore become an active area of research in natural language processing. Rapidly detecting offensive language on the internet and preventing it from spreading is of great practical significance in reducing cyberbullying and self-harm behaviors. Despite the crucial importance of this task, limited work has been done in this field for non-English languages such as Arabic. Therefore, in this paper, we aim to improve the results of Arabic offensive language detection without the need for laborious preprocessing or feature engineering. To achieve this, we combine the bidirectional encoder representations from transformers (BERT) model with a bidirectional gated recurrent unit (BiGRU) layer to further enhance the extracted contextual and semantic features. The experiments were conducted on the Arabic dataset provided by SemEval 2020 Task 12. The evaluation results show the effectiveness of our model compared to the baseline and related-work models, achieving a macro F1-score of 93.16%.
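To make the described architecture concrete, below is a minimal sketch of a BERT + BiGRU classifier in PyTorch with Hugging Face transformers. The abstract does not specify the BERT checkpoint, GRU hidden size, pooling strategy, or label count, so all of those are illustrative assumptions here (including the Arabic checkpoint name), not the authors' exact configuration.

```python
# Minimal sketch: BERT token-level features refined by a BiGRU, then a
# linear classification head. Hyperparameters and the checkpoint are
# assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint

class BertBiGRUClassifier(nn.Module):
    def __init__(self, model_name=MODEL_NAME, gru_hidden=128, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        # BiGRU over BERT's per-token hidden states to further enhance
        # the contextual and semantic features.
        self.bigru = nn.GRU(
            input_size=self.bert.config.hidden_size,
            hidden_size=gru_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Forward and backward final states are concatenated: 2 * gru_hidden.
        self.classifier = nn.Linear(2 * gru_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                 # (batch, seq_len, bert_hidden)
        _, h_n = self.bigru(hidden)         # h_n: (2, batch, gru_hidden)
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(pooled)      # logits: (batch, num_labels)

# Usage example with a single input text.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = BertBiGRUClassifier()
batch = tokenizer(["example tweet"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

One plausible reading of the design is that BERT supplies rich contextual embeddings while the BiGRU adds an explicit sequential pass in both directions before pooling, which is why no hand-crafted features or heavy preprocessing are needed.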
