Abstract
With the rapid progress of digitalization, hundreds of services and applications have come to depend on human-machine communication, primarily voice-based. As technology advances, it has become increasingly important to develop systems that can discern sophisticated human emotions such as sarcasm. Although considerable research has addressed the detection and analysis of emotion in speech signals, sarcasm, which is often not overtly expressed, has received far less attention in recent years. Sarcasm plays an important role in any utterance, offering cues to the speaker's psychology, mood, or even health; detecting it therefore gives human-machine interactions a more human-like quality and deepens the system's apparent understanding of human emotion. This article identifies sarcasm in speech using audio data drawn from a real-world, spontaneous, monolingual corpus. For classification, a deep learning model combining a convolutional neural network with long short-term memory layers (CNN-LSTM) is used. The results show that pairing feature extraction through convolutional layers with sequential learning through LSTM layers can capture the intricate speech patterns that characterize sarcasm. Experimental validation shows that the model discriminates well between sarcastic and non-sarcastic speech, supporting its broader use in sentiment analysis and human-computer interaction. The system approximates human interaction styles much more closely, with potential impact ranging from customer service and mental-health monitoring to AI-driven communication systems. The monolingual, spontaneous corpus opens the way for further work that may extend to multilingual and more diverse speech contexts, thereby improving sarcasm detection accuracy in practical scenarios.
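As an illustration only (the abstract itself contains no code), the following minimal sketch shows one plausible arrangement of the kind of CNN-LSTM classifier described: convolutional layers extracting local patterns from per-frame acoustic features, followed by an LSTM layer for sequential learning and a sigmoid output for the binary sarcastic/non-sarcastic decision. The input shape, the use of MFCC features, and all layer sizes are assumptions for illustration, not the authors' published configuration.

    # Minimal sketch of a CNN-LSTM sarcasm classifier over speech features.
    # Assumptions: utterances are represented as fixed-length sequences of
    # MFCC frames (padded/truncated); layer sizes are illustrative only.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn_lstm(n_frames=300, n_mfcc=40):
        inputs = layers.Input(shape=(n_frames, n_mfcc))
        # Convolutional layers extract local spectral-temporal patterns.
        x = layers.Conv1D(64, kernel_size=5, activation="relu", padding="same")(inputs)
        x = layers.MaxPooling1D(pool_size=2)(x)
        x = layers.Conv1D(128, kernel_size=5, activation="relu", padding="same")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
        # The LSTM layer models the sequential structure of the pooled features.
        x = layers.LSTM(128)(x)
        x = layers.Dropout(0.3)(x)
        # Sigmoid output: estimated probability that the utterance is sarcastic.
        outputs = layers.Dense(1, activation="sigmoid")(x)
        model = models.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_cnn_lstm()
    model.summary()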