Abstract

Multimodal emotion analysis is an important endeavor in human–computer interaction research, as it enables the accurate identification of an individual’s emotional state by simultaneously analyzing text, video, and audio features. Although current emotion recognition algorithms have performed well using multimodal fusion strategies, two key challenges remain. The first challenge is the efficient extraction of modality-invariant and modality-specific features prior to fusion, which requires deep feature interactions between the different modalities. The second challenge concerns the ability to distinguish high-level semantic relations between modality features. To address these issues, we propose a new modality-binding learning framework and redesign the internal structure of the transformer model. Our proposed modality-binding learning model addresses the first challenge by incorporating bimodal and trimodal binding mechanisms. These mechanisms handle modality-specific and modality-invariant features, respectively, and facilitate cross-modality interactions. Furthermore, we enhance feature interactions by introducing fine-grained convolution modules in the feedforward and attention layers of the transformer structure. To address the second issue, we introduce CLS and PE feature vectors for modality-invariant and modality-specific features, respectively, and use a similarity loss and a dissimilarity loss to support model convergence. Experiments on the widely used MOSI and MOSEI datasets show that our proposed method outperforms state-of-the-art multimodal sentiment classification approaches, confirming its effectiveness. The source code can be found at https://github.com/JackAILab/TMBL.
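As a rough illustration of the auxiliary objectives mentioned above, the minimal sketch below pairs a similarity term (pulling the modality-invariant, CLS-derived vectors of text, video, and audio toward each other) with a dissimilarity term (pushing the modality-specific, PE-derived vectors apart). The function names, loss weights, and tensor shapes are assumptions made for this example and are not taken from the paper; the authors' actual implementation is available at the repository linked above.

```python
# Hypothetical sketch of similarity / dissimilarity losses over
# modality-invariant and modality-specific features. Names, shapes,
# and weights are illustrative assumptions, not the TMBL implementation.
import torch
import torch.nn.functional as F

def similarity_loss(invariant_feats):
    """Pull the modality-invariant (CLS-derived) vectors of the three
    modalities toward each other via pairwise cosine similarity."""
    t = invariant_feats["text"]
    v = invariant_feats["video"]
    a = invariant_feats["audio"]
    loss = 0.0
    for x, y in [(t, v), (t, a), (v, a)]:
        # 1 - cos(x, y): zero when the two modality vectors align.
        loss = loss + (1.0 - F.cosine_similarity(x, y, dim=-1)).mean()
    return loss / 3.0

def dissimilarity_loss(specific_feats):
    """Push the modality-specific (PE-derived) vectors apart by
    penalizing their squared cosine similarity."""
    t = specific_feats["text"]
    v = specific_feats["video"]
    a = specific_feats["audio"]
    loss = 0.0
    for x, y in [(t, v), (t, a), (v, a)]:
        # cos(x, y)^2 is minimized when the vectors are orthogonal.
        loss = loss + F.cosine_similarity(x, y, dim=-1).pow(2).mean()
    return loss / 3.0

if __name__ == "__main__":
    batch, dim = 8, 128
    inv = {m: torch.randn(batch, dim) for m in ("text", "video", "audio")}
    spec = {m: torch.randn(batch, dim) for m in ("text", "video", "audio")}
    task_loss = torch.tensor(0.0)  # placeholder for the main sentiment loss
    total = task_loss + 0.1 * similarity_loss(inv) + 0.1 * dissimilarity_loss(spec)
    print(total.item())
```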
