Abstract

Sharing our feelings through images and short videos is one of the main ways we express ourselves on social networks. Visual content can affect people's emotions, so the task of analyzing the sentiment information carried by visual content has attracted increasing attention. Most current methods focus on improving local emotional representations to obtain better sentiment analysis performance, and ignore the problem of perceiving objects of different scales and different emotional intensities in complex scenes. In this paper, building on alterable-scale and multi-level analysis of local regional emotional affinity under a global perspective, we propose a multi-level context pyramid network (MCPNet) for visual sentiment analysis that combines local and global representations to improve classification performance. First, ResNet101 is employed as the backbone to obtain multi-level emotional representations capturing different degrees of semantic and detail information. Next, multi-scale adaptive context modules (MACM) are proposed to learn the degree of sentiment correlation among regions at different scales in the image and to extract multi-scale context features from each level of the deep representation. Finally, the context features from different levels are combined into a multi-cue sentiment feature for image sentiment classification. Extensive experiments on seven commonly used visual sentiment datasets show that our method outperforms the state-of-the-art methods; in particular, its accuracy on the FI dataset exceeds 90%.
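To make the pipeline concrete, below is a minimal PyTorch sketch of what one multi-scale adaptive context module (MACM) could look like: each scale branch pools the feature map into an s × s grid of regions and lets every spatial position attend over those regions, so the module learns how strongly each location correlates with regions at different scales. The class names, the attention-style affinity, and all layer sizes are our assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveContextBranch(nn.Module):
    """One scale branch (sketch): pool features to an s x s region grid,
    then let every spatial position attend over the s*s regions.
    Layer sizes are illustrative assumptions."""
    def __init__(self, channels, scale):
        super().__init__()
        self.scale = scale
        self.query = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        regions = F.adaptive_avg_pool2d(x, self.scale)       # B x C x s x s
        q = self.query(x).flatten(2).transpose(1, 2)         # B x HW x C/4
        k = self.key(regions).flatten(2)                     # B x C/4 x s^2
        v = self.value(regions).flatten(2).transpose(1, 2)   # B x s^2 x C
        # Affinity between each position and each region, softmax-normalized.
        affinity = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        context = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)
        return context

class MACM(nn.Module):
    """Multi-scale adaptive context module (sketch): sums the context
    gathered at several region scales and fuses it with the input."""
    def __init__(self, channels, scales=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            [AdaptiveContextBranch(channels, s) for s in scales])
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        context = sum(branch(x) for branch in self.branches)
        return x + self.fuse(context)
```

Under these assumptions, `MACM(2048)` would attach to the final ResNet101 stage, whose output has 2048 channels.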

Highlights

  • Studies have shown that image sentiment affects visual perception [1]

  • We propose two attributes for a visual sentiment analysis model: multi-scale perception and different levels of emotional representation

  • A novel multi-level context pyramid network composed of multi-scale adaptive context modules is proposed to learn the degree of sentiment correlation among regions at different scales in the image


Summary

Introduction

Studies have shown that image sentiment affects visual perception [1]. Compared with the non-emotional stimulus content in an image, affective content attracts the viewer's attention more strongly, and the viewer forms a more detailed understanding of the affective stimulus content [2]. Objects with a strong ability to express emotion are often complex [13], and such complex objects need more abstract high-level semantic features to describe their emotional information. Images (e)–(h) represent different levels of semantic content, from simple to complex. Multi-scale adaptive context modules combined with different levels of features are presented to capture more diverse cues and improve model performance; a sketch of this combination is given below. The contributions of this paper can be highlighted as follows: an adaptive context framework is introduced for the first time in the image sentiment analysis task; the method learns the degree of correlation among different regions in the image by combining representations at different scales, which helps the model understand complex scenes; and experiments demonstrate the strength of our method, with visualization results showing that it can effectively identify small semantic objects related to emotional expression in complex scenes.
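To illustrate how such context modules could be combined with different levels of backbone features, the sketch below (reusing the hypothetical `MACM` class from the abstract section) attaches one module to each of three ResNet101 stages, pools each result globally, and concatenates them into a multi-cue feature for classification. The fusion strategy, the choice of stages, and `num_classes=8` (matching the FI dataset's eight emotion categories) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class MCPNetSketch(nn.Module):
    """Illustrative multi-level context pyramid: a MACM (see the sketch
    above) on three ResNet101 stages, pooled and concatenated."""
    def __init__(self, num_classes=8):
        super().__init__()
        backbone = resnet101(weights=None)  # load pretrained weights in practice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        # One context module per level; channel counts follow ResNet101.
        self.macm2, self.macm3, self.macm4 = MACM(512), MACM(1024), MACM(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512 + 1024 + 2048, num_classes)

    def forward(self, x):
        x = self.stem(x)
        c2 = self.layer2(self.layer1(x))
        c3 = self.layer3(c2)
        c4 = self.layer4(c3)
        # Pool each context-enhanced level to a vector, then concatenate
        # into a single multi-cue feature for classification.
        feats = [self.pool(m(c)).flatten(1)
                 for m, c in ((self.macm2, c2), (self.macm3, c3),
                              (self.macm4, c4))]
        return self.classifier(torch.cat(feats, dim=1))
```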

Related Work
CNN with Additional Information
Region-Based CNN
Context-Based CNN
Methodology
Proposed Multi-Level Context Pyramid Network
Cross-Layer and Multi-Layer Feature Fusion Strategies
Some examples from the datasets
Implementation Details
Hand-Crafted Features
Features Based on CNN
Choice of scale s
Effectiveness
Comparisons with State-of-the-Art Methods
Findings
Conclusions
