With the rapid progress of digitalization, hundreds of services and applications have come to depend on human-machine communication, primarily voice-based. As technology advances, it has become increasingly important to develop systems that can discern sophisticated human emotions such as sarcasm. Although considerable research has addressed the detection and analysis of emotion in speech signals, sarcasm, which is often not overtly expressed, has received far less attention in recent years. Sarcasm plays an important role in an utterance, revealing the speaker's psychology, mood, or even health; detecting it makes human-machine interactions more human-like and deepens a system's understanding of human emotion. This article attempts to identify sarcasm in speech using audio data drawn from a real-world, spontaneous, monolingual corpus. For classification, a deep learning CNN-LSTM model is used. The results show that combining feature extraction through convolutional layers with sequential learning through LSTM layers can capture the intricate speech patterns of sarcasm. Experimental validation indicates that the model discriminates well between sarcastic and non-sarcastic speech, supporting its broader use in sentiment analysis and human-computer interaction applications. The findings suggest that the system approximates human interaction styles much more closely, with potential impact ranging from customer service and mental health monitoring to AI-driven communication systems. The use of a monolingual, spontaneous corpus opens the way for future work extending to multilingual and more diverse speech contexts, further improving the accuracy of sarcasm detection in practical scenarios.
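The abstract does not specify the network configuration, input features, or hyperparameters. As a rough illustration of the CNN-LSTM pairing it describes, the following is a minimal sketch in Keras, assuming MFCC input features and hypothetical layer sizes; none of these choices are taken from the paper itself.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input shape: 100 time frames x 40 MFCC coefficients per utterance.
# The paper does not state its features or hyperparameters; these are assumptions.
N_FRAMES, N_MFCC = 100, 40

model = models.Sequential([
    layers.Input(shape=(N_FRAMES, N_MFCC)),
    # Convolutional layers extract local spectral-temporal patterns from the audio features.
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # An LSTM layer models the sequential structure of the utterance over time.
    layers.LSTM(64),
    layers.Dropout(0.3),
    # Sigmoid output: estimated probability that the utterance is sarcastic.
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The division of labor matches the claim in the abstract: the convolutional front end learns local acoustic patterns, while the recurrent layer aggregates them across the utterance before the binary sarcastic/non-sarcastic decision.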