Abstract

Sentiment is a high-level abstraction, and accurately extracting sentimental features from visual content is challenging due to the "affective gap". Previous works focus on extracting more concrete sentimental features of individual objects by introducing saliency detection or instance segmentation into their models, neglecting the interaction among objects. Motivated by the observation that interactions among objects can affect the sentiment of an image, we propose the Sentiment Interaction Distillation (SID) Network, which uses object sentiment interaction to guide feature learning. Specifically, we first apply a panoptic segmentation method to obtain the objects in an image; we then propose a sentiment-related edge generation method and employ a Graph Convolutional Network to aggregate and propagate object relation representations. In addition, we propose a knowledge distillation framework that uses interaction information to guide global context feature learning, avoiding the noisy features introduced by error propagation and by a varying number of objects. Experimental results show that our method outperforms state-of-the-art algorithms, e.g., by about 1.2% on the Flickr dataset and 1.7% on the most challenging subset of Twitter I. This demonstrates that the reasonable use of interaction features can improve the performance of sentiment analysis.
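The knowledge distillation idea mentioned above can be illustrated with a minimal sketch: a "teacher" branch (here standing in for the object-interaction features) produces soft targets that guide a "student" branch (the global context features) via a KL-divergence term. This is the standard temperature-scaled distillation loss, not necessarily the paper's exact formulation; all names and values below are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer (more uniform) probability distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the softened teacher and student
    # distributions (Hinton-style distillation term).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy example: interaction-branch (teacher) logits guide the
# global-context-branch (student) logits. Values are illustrative.
teacher = [2.0, 0.5, -1.0]
student = [1.5, 0.7, -0.8]
loss = kd_loss(student, teacher)
```

Minimizing this term pulls the global branch's predictions toward the interaction-aware branch's soft predictions, which is how distillation transfers interaction cues without feeding the (possibly noisy) object features directly into the global model.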
