Abstract

Sarcasm is a perplexing form of human expression that presents distinct challenges in understanding. Sarcasm detection has largely centered on analyzing individual utterances in isolation, which may not provide a comprehensive understanding of the speaker's sarcastic intent. Our work addresses this problem by exploring and understanding the specific contextual cues that contribute to sarcasm. In this paper, we propose an enhanced approach for sarcasm detection using contextual features. Our methodology involves employing pre-trained transformer models, RoBERTa and DistilBERT, and fine-tuning them on two datasets: the News Headlines and the Mustard datasets. By incorporating contextual information, the proposed approach yielded the best performance, achieving an F1 score of 99% on the News Headlines dataset and 90% on the Mustard dataset. Moreover, we experimented with summarizing the context into a single concise sentence, which reduced training time by 35.5%. We further validated the model trained on the News Headlines dataset against the Reddit dataset, which yielded an F1 score of 49% without context data; with the inclusion of context data, the F1 score rose to 75%. The proposed approach enhances the understanding of sarcasm in different contextual settings, enabling more accurate sentiment analysis and better decision-making in various applications.
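
As a rough illustration of the kind of context-augmented fine-tuning the abstract describes, the sketch below pairs each utterance with its context as a sentence pair and fine-tunes a pre-trained transformer with the Hugging Face transformers library. This is a minimal sketch rather than the authors' implementation: the model checkpoint, dataset fields, example records, and the sentence-pair encoding of context and utterance are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of fine-tuning a pre-trained
# transformer on sarcasm detection with contextual input.
# Model name, record fields, and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

MODEL_NAME = "roberta-base"  # could equally be "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical examples: each record pairs an utterance with its context.
records = [
    {"context": "The flight was delayed for six hours.",
     "utterance": "What a fantastic start to the vacation.", "label": 1},
    {"context": "The team shipped the release on schedule.",
     "utterance": "Great work, everyone.", "label": 0},
]

def tokenize(batch):
    # Encode context and utterance as a sentence pair so the model
    # conditions on both when predicting sarcasm.
    return tokenizer(batch["context"], batch["utterance"],
                     truncation=True, padding="max_length", max_length=128)

dataset = Dataset.from_list(records).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sarcasm-ctx",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```

Summarizing the context into a single short sentence before tokenization, as the abstract mentions, would simply replace the "context" field with its summary; the rest of the pipeline stays unchanged.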
