Abstract

Irony and Sarcasm Detection (ISD) is a crucial task for many NLP applications, especially sentiment and opinion mining, and it is considered challenging even for humans. Several studies have employed Deep Learning (DL) approaches, including Deep Neural Networks (DNNs), to detect ironic and sarcastic content. However, most of them, particularly those concerning deep neural architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have concentrated on detecting sarcasm in English rather than Arabic content. This paper investigates several deep learning approaches, including DNNs and fine-tuned pretrained transformer-based language models, for identifying Arabic sarcastic tweets. In addition, it presents a comprehensive evaluation of the impact of data preprocessing techniques and several pretrained word embedding models on the performance of the proposed deep models. Two shared-task datasets on Arabic sarcasm detection are used to develop, fine-tune, and evaluate the techniques and methods presented in this paper. Results on the first dataset showed that the fine-tuned pretrained transformer-based language models outperformed the developed DNNs, while on the second dataset the proposed DNN models obtained performance comparable to that of the fine-tuned models. The results also demonstrated that applying preprocessing techniques is necessary for the various deep learning approaches to achieve better detection performance.
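For illustration, the sketch below shows one common way to fine-tune a pretrained transformer-based language model for binary sarcasm classification of Arabic tweets using the Hugging Face Transformers library. The abstract does not specify the checkpoint, hyperparameters, or data format used in the paper, so the model name, column names, and training settings here are assumptions, not the authors' exact setup.

```python
# Minimal sketch: fine-tuning a pretrained Arabic transformer for binary
# sarcasm detection with Hugging Face Transformers. The checkpoint name,
# hyperparameters, and toy data below are illustrative assumptions and do
# not reproduce the paper's experimental setup.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed Arabic checkpoint

# Toy examples standing in for a shared-task dataset of (tweet, label) pairs.
train_ds = Dataset.from_dict({
    "text": ["يا سلام على الذكاء!", "الجو جميل اليوم"],
    "label": [1, 0],  # 1 = sarcastic, 0 = not sarcastic
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate and pad each tweet to a fixed length before encoding.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_ds = train_ds.map(tokenize, batched=True)

# Load the pretrained encoder with a fresh 2-class classification head.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

args = TrainingArguments(output_dir="sarcasm-model",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```

In practice, the same pipeline would also apply tweet-specific preprocessing (e.g., normalizing Arabic letters and removing URLs and user mentions) before tokenization, which is the kind of preprocessing whose impact the paper evaluates.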
