Abstract

Nowadays, people are accustomed to posting images and associated text to express their emotions on social networks. Accordingly, multimodal sentiment analysis has drawn increasing attention. Most existing image-text multimodal sentiment analysis methods simply predict sentiment polarity. However, the same sentiment polarity may correspond to quite different emotions, such as happiness vs. excitement and disgust vs. sadness. Therefore, sentiment polarity is ambiguous and may not accurately convey the emotions that people intend to express. Psychological research has shown that objects and words are emotional stimuli and that semantic concepts can affect the role of such stimuli. Inspired by this observation, this paper presents a new MUlti-Level SEmantic Reasoning network (MULSER) for fine-grained image-text multimodal emotion classification, which not only investigates the semantic relationships among objects and among words respectively, but also explores the semantic relationship between regional objects and global concepts. For the image modality, we first build graphs to extract object and global representations, and employ a graph attention module to perform bilevel semantic reasoning. Then, a joint visual graph is built to learn the regional-global semantic relations. For the text modality, we build a word graph and further apply graph attention to reinforce the interdependencies among words in a sentence. Finally, a cross-modal attention fusion module is proposed to fuse the semantic-enhanced visual and textual features, based on which informative multimodal representations are obtained for fine-grained emotion classification. The experimental results on public datasets demonstrate the superiority of the proposed model over state-of-the-art methods.
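
The abstract does not give implementation details, so the following PyTorch sketch is an illustration only of the two core operations it names: graph attention over a semantic graph of nodes (objects or words) and cross-modal attention fusion of the resulting visual and textual features. All layer names, dimensions, head counts, and scoring functions below are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the MULSER code) of graph attention
# semantic reasoning and cross-modal attention fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: each node aggregates its neighbours,
    weighted by learned attention scores over the semantic graph."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x:   (N, in_dim) node features (e.g. region or word embeddings)
        # adj: (N, N) adjacency matrix of the semantic graph (with self-loops)
        h = self.proj(x)                                      # (N, out_dim)
        n = h.size(0)
        # Pairwise concatenation of node features for attention scoring.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)        # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))   # (N, N)
        scores = scores.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)                 # attention weights
        return F.elu(alpha @ h)                               # updated node features


class CrossModalAttentionFusion(nn.Module):
    """Fuse semantic-enhanced visual and textual features: visual nodes attend
    over word nodes and vice versa, then the pooled outputs are concatenated."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.v2t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.t2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis, txt):
        # vis: (B, Nv, dim) visual graph features; txt: (B, Nt, dim) word features
        v_att, _ = self.v2t(vis, txt, txt)   # visual queries attend to words
        t_att, _ = self.t2v(txt, vis, vis)   # text queries attend to regions
        fused = torch.cat([v_att.mean(dim=1), t_att.mean(dim=1)], dim=-1)
        return fused                         # (B, 2*dim) multimodal representation
```

In this reading, the graph attention layer would be applied separately to the object graph, the joint regional-global visual graph, and the word graph, and the fused representation would feed a standard classification head over the fine-grained emotion categories; the actual graph construction and classifier are described in the paper itself.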
