Abstract

Visual-textual sentiment analysis could benefit user understanding in online social networks and enable many useful applications such as user profiling and recommendation. However, it faces a set of new challenges, namely the exacerbated noise problem caused by irrelevant or redundant information across modalities, and the gap in jointly understanding multimodal sentiment. In this article, we propose a hierarchical cross-modality interaction model for visual-textual sentiment analysis. Our model emphasises the consistency and correlation across modalities by extracting the semantic and sentiment interactions between image and text in a hierarchical way, which copes with the noise and joint-understanding issues, respectively. A hierarchical attention mechanism is first adopted to capture the semantic interaction and purify the information in one modality with the help of the other. Then, a multimodal convolutional neural network, which fully exploits cross-modality sentiment interaction, is incorporated to generate a better joint visual-textual representation. A transfer learning method is further designed to alleviate the impact of noise in real social data. Through extensive experiments on two datasets, we show that our proposed framework substantially outperforms state-of-the-art approaches. In particular, phrase-level text fragments play an important role in interacting with image regions for joint visual-textual sentiment analysis.
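To make the described pipeline concrete, below is a minimal sketch, in PyTorch, of the two interaction stages the abstract mentions: cross-modality attention between phrase-level text features and image region features, followed by a multimodal CNN that fuses the attended streams into a joint sentiment representation. The module name, feature dimensions, and layer choices are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class CrossModalSentimentSketch(nn.Module):
    """Hedged sketch: phrase-level text features attend over image region
    features (and vice versa), then a small multimodal CNN fuses the two
    attended streams into a joint visual-textual sentiment representation."""

    def __init__(self, dim=512, num_classes=2):
        super().__init__()
        # Cross-modality attention, text as queries, image regions as keys/values.
        self.txt2img_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Symmetric direction: image regions as queries, text phrases as keys/values.
        self.img2txt_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Multimodal CNN over the concatenated sequence of attended features.
        self.fusion_cnn = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, n_phrases, dim)  e.g. phrase-level encodings
        # image_feats: (batch, n_regions, dim)  e.g. CNN region features
        attended_text, _ = self.txt2img_attn(text_feats, image_feats, image_feats)
        attended_image, _ = self.img2txt_attn(image_feats, text_feats, text_feats)
        # Concatenate both attended streams along the sequence axis and let the
        # multimodal CNN extract joint features for sentiment classification.
        joint = torch.cat([attended_text, attended_image], dim=1)       # (B, L, dim)
        joint = self.fusion_cnn(joint.transpose(1, 2)).squeeze(-1)      # (B, dim)
        return self.classifier(joint)


# Usage with random features: 12 hypothetical phrases and 49 image regions.
model = CrossModalSentimentSketch()
logits = model(torch.randn(4, 12, 512), torch.randn(4, 49, 512))  # -> (4, 2)
```

The transfer-learning step mentioned in the abstract (pre-training on cleaner data before fine-tuning on noisy social data) would sit outside this module and is omitted here.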
