Abstract
Affective image classification, which aims to classify images according to the emotions they induce in viewers, has drawn increasing research attention in the multimedia community. Although many features have been explored, the semantic gap between low-level visual features and high-level emotional semantics remains a major challenge. In this paper, we propose an affective image classification algorithm that jointly uses visual features extracted under the guidance of art theory and semantic image annotations, such as object and scene categories, generated by a pre-trained deep convolutional neural network. The algorithm has been evaluated against three state-of-the-art approaches on three benchmark image datasets. Our results indicate that combining interpretable aesthetic features with semantic annotations better characterizes emotional semantics, and that the proposed algorithm produces more accurate affective image classification than the other three approaches.
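To illustrate the general idea described in the abstract, the sketch below combines hand-crafted aesthetic-style features with semantic annotations from a pre-trained CNN and trains a simple classifier on the joint representation. It is only a minimal illustration, not the authors' method: the ImageNet ResNet-50 backbone, the HSV color statistics standing in for the paper's art-theory features, the linear SVM, and the function names (`semantic_annotation`, `aesthetic_features`, `joint_feature`, `train_affective_classifier`) are all assumptions, and a recent torchvision (>= 0.13) is assumed for the weights API.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from sklearn.svm import LinearSVC

# Pre-trained CNN used only as a source of semantic annotations (object
# category probabilities). ResNet-50 on ImageNet is an assumption here;
# the paper's network and annotation vocabulary may differ.
weights = models.ResNet50_Weights.DEFAULT
cnn = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def semantic_annotation(img: Image.Image) -> np.ndarray:
    """Softmax over object categories, used as high-level semantic features."""
    with torch.no_grad():
        logits = cnn(preprocess(img).unsqueeze(0))
    return torch.softmax(logits, dim=1).squeeze(0).numpy()

def aesthetic_features(img: Image.Image) -> np.ndarray:
    """Toy stand-in for art-theory-guided features (the paper's actual
    descriptors are not reproduced): per-channel mean and std in HSV space."""
    hsv = np.asarray(img.convert("HSV"), dtype=np.float32) / 255.0
    return np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])

def joint_feature(img: Image.Image) -> np.ndarray:
    """Concatenate interpretable aesthetic features with semantic annotations."""
    return np.concatenate([aesthetic_features(img), semantic_annotation(img)])

def train_affective_classifier(images, labels):
    """Hypothetical usage: fit a linear SVM on the joint features.
    `images` is a list of PIL images, `labels` the emotion category of each."""
    X = np.stack([joint_feature(im) for im in images])
    return LinearSVC().fit(X, labels)
```

The point of the sketch is the fusion step in `joint_feature`: low-level, interpretable aesthetic cues and high-level semantic annotations enter the classifier as one concatenated vector, which is the kind of joint use of both feature sources the abstract describes.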