Abstract

Human communication carries rich emotional content, so multimodal emotion recognition plays an important role in communication between humans and computers. Because a speaker's emotional characteristics are complex, emotion recognition remains challenging, particularly in capturing emotional cues across a variety of modalities, such as speech, facial expressions, and language. Audio and visual cues are particularly vital for a human observer in understanding emotions. However, most previous work on emotion recognition has relied solely on linguistic information, which overlooks these forms of nonverbal information. In this paper, we present a new multimodal emotion recognition approach that improves the BERT model by combining it with heterogeneous features from the language, audio, and visual modalities. Specifically, we adapt the BERT model to handle the heterogeneous features of the audio and visual modalities. We introduce the Self-Multi-Attention Fusion module, the Multi-Attention Fusion module, and the Video Fusion module, which are attention-based multimodal fusion mechanisms built on the recently proposed transformer architecture. We explore optimal ways to combine fine-grained representations of audio and visual features into a common embedding while fine-tuning a pre-trained BERT model together with these modalities. In our experiments, we evaluate our approach on the commonly used CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets for multimodal sentiment analysis. Ablation analysis indicates that the audio and visual components contribute significantly to the recognition results, suggesting that these modalities contain highly complementary information for video-based sentiment analysis. Our method achieves state-of-the-art performance on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets.
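The abstract names three attention-based fusion modules without detailing them here. As a hedged illustration only, the following PyTorch sketch shows one way a transformer-style cross-modal attention block could fuse projected audio and visual features with BERT text states; the class name CrossModalAttentionFusion, the feature dimensions, and the wiring are our assumptions, not the authors' exact design.

```python
# Hypothetical sketch of attention-based multimodal fusion (not the authors' exact modules).
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Fuses text hidden states (e.g., from BERT) with audio/visual features via
    multi-head attention; all dimensions below are illustrative assumptions."""
    def __init__(self, text_dim=768, audio_dim=74, visual_dim=35, num_heads=8):
        super().__init__()
        # Project heterogeneous audio/visual features into the text embedding space.
        self.audio_proj = nn.Linear(audio_dim, text_dim)
        self.visual_proj = nn.Linear(visual_dim, text_dim)
        # Text tokens attend over the concatenated audio-visual sequence.
        self.cross_attn = nn.MultiheadAttention(text_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, text_hidden, audio_feats, visual_feats):
        # text_hidden: (B, T_t, 768); audio_feats: (B, T_a, 74); visual_feats: (B, T_v, 35)
        av = torch.cat([self.audio_proj(audio_feats),
                        self.visual_proj(visual_feats)], dim=1)   # (B, T_a + T_v, 768)
        fused, _ = self.cross_attn(query=text_hidden, key=av, value=av)
        return self.norm(text_hidden + fused)                     # residual connection
```

Projecting the audio and visual streams into the text embedding space before attention is one simple way to reconcile heterogeneous feature dimensions in a common embedding.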

Highlights

  • Effective communication among humans requires not only intellectual exchange but also the sharing of contextual emotions

  • We describe a transformer-based process that effectively fuses heterogeneous audio and visual feature information

  • CMU-MOSI consists of 2,199 short monologue video clips of YouTube movie reviews, used for multimodal sentiment and emotion recognition

Summary

INTRODUCTION

Effective communication among humans requires not only intellectual exchange but also the sharing of contextual emotions. For deep-learning-based emotion recognition, [13]–[16] utilized CNNs to extract facial features salient to expressed emotions. Another important feature for classifying emotions is the textual content of speech. These unimodal-feature-based methods were found to be insufficient for inferring a speaker's sentiment, since many salient emotional features are expressed simultaneously via different modalities [31]. Our Heterogeneous Features Unification BERT (HFU-BERT) integrates BERT into our architecture to effectively combine heterogeneous features extracted by both handcrafted and deep-learning-based methods.
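As a minimal, hypothetical sketch of how such a unification could be wired for fine-tuning, the code below assumes the HuggingFace transformers library and a fusion block like the one sketched after the abstract; the class name HFUSentimentHead, the pooling choice, and the head dimensions are illustrative assumptions rather than the paper's exact implementation.

```python
# Hypothetical end-to-end wiring: fine-tuning BERT alongside audio/visual features
# (a sketch under our own assumptions, not HFU-BERT's exact architecture).
import torch.nn as nn
from transformers import BertModel

class HFUSentimentHead(nn.Module):
    def __init__(self, fusion_module, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # fine-tuned jointly
        self.fusion = fusion_module           # e.g., a cross-modal attention block
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask, audio_feats, visual_feats):
        text_hidden = self.bert(input_ids=input_ids,
                                attention_mask=attention_mask).last_hidden_state
        fused = self.fusion(text_hidden, audio_feats, visual_feats)
        # Pool the fused [CLS] position for utterance-level sentiment/emotion prediction.
        return self.classifier(fused[:, 0])
```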

RELATED WORK
VISUAL FEATURES
TEXT PREPROCESSING
MULTI-ATTENTION FUSION
EXPERIMENTS
RESULTS AND DISCUSSION
CONCLUSION
