Abstract

Sarcasm prediction is a text-classification task that aims to distinguish sarcastic from non-sarcastic statements. Sarcasm is a figure of speech that uses opposite or contradictory language to convey a particular meaning or attitude; it is usually cryptic, vague, and suggestive, which makes its prediction challenging. Sarcasm prediction projects typically leverage natural language processing (NLP) techniques to analyze and classify text. The main difficulty is that sarcasm takes many forms and must be resolved using the contextual and semantic information of the text. Sarcasm prediction therefore holds significant application value in NLP, for example in social media analysis, public opinion monitoring, and sentiment analysis. In this paper, by controlling variables, the influence of adding long short-term memory (LSTM) layers and changing the network structure on prediction accuracy is explored. Moreover, the prediction accuracy of the LSTM model is compared with that of the bidirectional encoder representations from Transformers (BERT) model. The paper also analyzes and discusses why increasing the number of LSTM layers does not yield higher prediction accuracy, examines the accuracy gap between the LSTM and BERT models, and draws the corresponding conclusions.
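As a minimal sketch of the recurrence inside each LSTM layer discussed above (the dimensions, weight initialization, and function names here are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input vector of size D; h_prev, c_prev: previous hidden and
    cell states of size H; W: (4H, D), U: (4H, H), b: (4H,) hold the
    stacked parameters of the four gates.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell state
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c
```

Stacking additional LSTM layers means feeding each layer's hidden states `h` as the input sequence of the next layer; as the paper observes, deeper stacks do not automatically improve accuracy.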
