Abstract

In human–computer interaction (HCI), and especially in speech signal processing, emotion recognition is one of the most important and challenging tasks due to its multi-modal nature and limited data availability. Real-world applications increasingly require an intelligent system that can efficiently process and understand a speaker's emotional state and enhance the analytical abilities of a human–machine interface (HMI) to assist communication. Designing a reliable and robust Multimodal Speech Emotion Recognition (MSER) system that efficiently recognizes emotions from multiple modalities, such as speech and text, is therefore necessary. This paper proposes a novel MSER model with a deep feature fusion technique based on a multi-headed cross-attention mechanism. The proposed model uses audio and text cues to predict the emotion label. It processes the raw speech signal and text with CNNs and feeds the results to the corresponding encoders for discriminative and semantic feature extraction. A cross-attention mechanism is applied to both feature sets to enhance the interaction between text and audio cues, with each modality attending over the other to extract the information most relevant to emotion recognition. Finally, the proposed deep feature fusion scheme combines the region-wise weights from both encoders, enabling interaction among different layers and paths. We evaluate the proposed system on the IEMOCAP and MELD datasets through extensive experiments, obtaining state-of-the-art (SOTA) results and a 4.5% improvement in recognition rate. This significant improvement over SOTA methods demonstrates the robustness and effectiveness of the proposed MSER model.
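The cross-attention fusion step described above can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration, not the authors' implementation: the feature dimension, number of heads, pooling strategy, and class count are assumed for demonstration, and the encoder outputs are replaced by random tensors.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch: multi-headed cross-attention fusion of audio and
    text features, loosely following the abstract's description."""

    def __init__(self, dim=256, num_heads=4, num_classes=4):
        super().__init__()
        # Text queries attend over audio features, and audio queries over text.
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, audio_feats, text_feats):
        # audio_feats: (batch, T_audio, dim) from the audio encoder
        # text_feats:  (batch, T_text, dim) from the text encoder
        a_attended, _ = self.text_to_audio(text_feats, audio_feats, audio_feats)
        t_attended, _ = self.audio_to_text(audio_feats, text_feats, text_feats)
        # Pool over time and fuse the two attended representations.
        fused = torch.cat([a_attended.mean(dim=1), t_attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)


# Usage with random tensors standing in for encoder outputs.
model = CrossAttentionFusion()
audio = torch.randn(8, 100, 256)   # e.g. CNN features from raw speech
text = torch.randn(8, 30, 256)     # e.g. encoder features from tokenized text
logits = model(audio, text)        # (8, num_classes) emotion logits
```

The two attention blocks let each modality query the other, which is one common way to realize the crossway interaction the abstract refers to before the fused representation is classified.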
