Abstract
Social media posts often contain a mixture of images and text. This paper proposes an affective visual descriptor and an integrated visual-textual classification method for sentiment analysis in social media. Firstly, a set of affective visual features is explored based on theories from psychology and art. Secondly, a structured forest is proposed to generate a bag of affective words (BoAW) from the joint distribution of Adjective-Noun Pairs (ANPs). The generated BoAW provides basic "visual cues" for sentiment analysis. Then, a set of sentiment part (SSP) features is introduced to integrate the visual and textual descriptors on multiple statistical manifolds. Finally, multi-scale sentiment classification is performed through metric learning on the manifold kernels. In the proposed method, a class activation mapping (CAM) network pre-trained on ILSVRC 2014 is re-trained on an ANP-labelled affective visual data set. The global average pooling (GAP) layer of the CAM network is used for discriminative localization, and the fully connected layer generates objective visual descriptors. A set of 300 tweets with mixed images and text is manually labelled and used for evaluation. The proposed structured forest is evaluated on the ANP-labelled image data set. Promising experimental results have been obtained, which show the effectiveness of the proposed method for sentiment analysis on social media posts.
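To make the CAM-based visual descriptor concrete, the following is a minimal sketch (not the authors' implementation) of a CAM-style head: a convolutional backbone followed by global average pooling (GAP) and a single fully connected layer, whose weights can be projected back onto the last feature maps for discriminative localization. The backbone choice (ResNet-18) and the number of ANP classes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models


class CAMHead(nn.Module):
    """Sketch of a GAP + FC head in the style of class activation mapping."""

    def __init__(self, num_anp_classes: int = 1200):  # ANP vocabulary size is an assumption
        super().__init__()
        backbone = models.resnet18(weights=None)  # pre-trained weights would be loaded in practice
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature maps
        self.gap = nn.AdaptiveAvgPool2d(1)        # global average pooling layer
        self.fc = nn.Linear(512, num_anp_classes) # FC layer producing the visual descriptor / ANP scores

    def forward(self, x):
        fmap = self.features(x)             # (B, 512, H, W) convolutional feature maps
        pooled = self.gap(fmap).flatten(1)  # (B, 512) pooled descriptor
        logits = self.fc(pooled)            # (B, num_anp_classes) ANP scores
        # Class activation maps: weight the feature maps by the FC weights,
        # as in the original CAM formulation, for discriminative localization.
        cams = torch.einsum('ck,bkhw->bchw', self.fc.weight, fmap)
        return logits, cams


# Usage example with a random input, for illustration only.
model = CAMHead(num_anp_classes=1200)
logits, cams = model(torch.randn(1, 3, 224, 224))
print(logits.shape, cams.shape)  # torch.Size([1, 1200]) torch.Size([1, 1200, 7, 7])
```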