Abstract

Learning unimodal representations and improving multimodal fusion are two core tasks of multimodal sentiment analysis (MSA). However, previous methods ignore the information differences between modalities: the text modality carries higher-order semantic features than the other modalities. In this article, we propose a sparse- and cross-attention (SCANET) framework with an asymmetric architecture that improves both multimodal representation and fusion. Specifically, in the unimodal representation stage, we use sparse attention to improve the representation efficiency of the audio and visual modalities and to reduce their low-order redundant features. In the multimodal fusion stage, we design an innovative asymmetric fusion module that uses the audio and visual information matrices as weights to strengthen the target text modality. We also introduce contrastive learning to effectively enhance complementary features between modalities. We evaluate SCANET on the CMU-MOSI and CMU-MOSEI datasets, and the experimental results show that the proposed method achieves state-of-the-art performance.
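
The abstract describes an asymmetric fusion stage in which the audio and visual information matrices re-weight the target text modality. The paper's implementation details are not reproduced here; the sketch below is a hypothetical PyTorch illustration of such text-queried cross-attention, with the module name, dimensions, and layer choices assumed rather than taken from the released code.

```python
# Minimal sketch (not the authors' released code): a hypothetical asymmetric
# cross-attention fusion step in which audio/visual features act as keys and
# values that re-weight the target text representation, as the abstract describes.
import torch
import torch.nn as nn


class AsymmetricCrossAttentionFusion(nn.Module):
    """Hypothetical fusion block: text queries attend over audio/visual features."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Text attends to audio and to visual features in separate attention heads.
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_to_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, audio, visual):
        # Audio/visual serve as the key/value matrices whose attention weights
        # strengthen the text modality; text remains the target representation.
        a_enhanced, _ = self.text_to_audio(query=text, key=audio, value=audio)
        v_enhanced, _ = self.text_to_visual(query=text, key=visual, value=visual)
        return self.norm(text + a_enhanced + v_enhanced)


# Example usage with toy tensors (batch=2, sequence length=20, feature dim=128).
fusion = AsymmetricCrossAttentionFusion(dim=128, num_heads=4)
text = torch.randn(2, 20, 128)
audio = torch.randn(2, 20, 128)
visual = torch.randn(2, 20, 128)
fused = fusion(text, audio, visual)  # shape: (2, 20, 128)
```

The asymmetry here is only directional: text is always the query and is never used to re-weight audio or visual features, which matches the abstract's framing of text as the target modality with higher-order semantics.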
