Abstract

Image sentiment analysis is an active research topic in computer vision, but two key issues remain. First, high-quality training samples are scarce: the original datasets contain numerous ambiguous images owing to the diverse subjective judgments of different annotators. Second, the cross-modal sentimental semantics among heterogeneous image features has not been fully explored. To alleviate these problems, we propose a novel model called multidimensional extra evidence mining (ME2M) for image sentiment analysis, which involves sample refinement and cross-modal sentimental semantics mining. A new soft voting-based sample-refinement strategy is designed to address the former problem, while the state-of-the-art discriminant correlation analysis (DCA) model is used to fully mine the cross-modal sentimental semantics among diverse image features. Image sentiment analysis is then conducted on the cross-modal sentimental semantics with a general classifier. The experimental results verify that the ME2M model is effective and robust, outperforming the most competitive baselines on two well-known datasets. Furthermore, its flexible structure makes it versatile.
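
To make the sample-refinement idea concrete, the sketch below shows one plausible reading of a soft voting-based refinement step: several base classifiers produce out-of-fold class probabilities, the probabilities are averaged, and images whose averaged confidence in their annotated label falls below a threshold are discarded as ambiguous. The choice of base models, the 0.6 threshold, and the function name refine_samples are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def refine_samples(X, y, threshold=0.6):
    # Hypothetical soft voting-based refinement; y is integer-encoded (0..K-1).
    base_models = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=100, random_state=0),
        SVC(probability=True, random_state=0),
    ]
    # Out-of-fold probabilities, so no sample is scored by a model trained on it.
    probas = [
        cross_val_predict(model, X, y, cv=5, method="predict_proba")
        for model in base_models
    ]
    soft_vote = np.mean(probas, axis=0)            # average over the base models
    label_conf = soft_vote[np.arange(len(y)), y]   # confidence in the annotated label
    return np.where(label_conf >= threshold)[0]    # indices of unambiguous samples

The returned indices would then be used to select the refined training subset before the cross-modal fusion stage.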

Highlights

  • With the rapid development of social media, we prefer to upload videos, audio, images, and texts to blogs to express our personal emotions

  • We present our experimental results in a systematic manner: the ME2M model that uses only the cross-modal sentimental semantics is evaluated in the first subsection

  • As described in Section III (D), cross-modal sentimental semantics mining is a key component of the ME2M model

Introduction

With the rapid development of social media, we prefer to upload videos, audio, images, and texts to blogs (or microblogs or WeChat) to express our personal emotions. Apart from texts and audio, images carry much valuable sentimental semantics owing to their rich visual information, which can be put to many significant uses. Timely psychological intervention becomes feasible for depressed persons if we can accurately capture their emotions from their social media information. Likewise, preferences predicted by such a model can support popularity prediction, and these predictions help people make better-informed decisions. We propose a novel model called ME2M for image sentiment analysis. The ME2M model exploits several kinds of valuable evidence, including "new image features," "refined samples," and "cross-modal sentimental semantics," to build an effective classification model. Accordingly, the main contributions of this paper can be summarized as follows:
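
To illustrate the fusion-and-classification stage described above, the sketch below pairs a cross-decomposition step with a general classifier. DCA has no standard scikit-learn implementation, so canonical correlation analysis (CCA) is substituted here for the step that projects two heterogeneous image feature sets into a correlated space; the feature arguments, n_components=32, and the RBF-kernel SVM are assumptions for illustration only, not the paper's exact configuration.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_and_classify(feat_a_train, feat_b_train, y_train,
                      feat_a_test, feat_b_test, n_components=32):
    # Project the two heterogeneous feature sets into a shared correlated space
    # (CCA stands in here for the DCA step used by the paper).
    cca = CCA(n_components=n_components)
    cca.fit(feat_a_train, feat_b_train)
    za_train, zb_train = cca.transform(feat_a_train, feat_b_train)
    za_test, zb_test = cca.transform(feat_a_test, feat_b_test)

    # Concatenate the correlated projections as the fused representation.
    fused_train = np.hstack([za_train, zb_train])
    fused_test = np.hstack([za_test, zb_test])

    # A general classifier (here an RBF-kernel SVM) on the fused features.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(fused_train, y_train)
    return clf.predict(fused_test)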
