Abstract

Identifying the emotional sentiment projected by an image is a challenging task, because the sentiment an image conveys can depend on a very diverse set of factors. This paper presents a novel approach to predicting the emotional sentiment of a group of people in a variety of environments. The proposed technique combines local facial features of subjects with global scene features to estimate the emotional sentiment in group-level emotion recognition. Two separate convolutional neural networks with different architectures are designed to classify group-level emotion into three categories: negative, neutral, and positive. The first convolutional neural network, referred to as the Scene-model, learns the global features in the data; a novel partial fine-tuning process is proposed to train this model on task-specific data. The second convolutional model, referred to as the Face-model, is trained on facial expression datasets to learn the emotional state of the subjects in an image. The joint distribution of the global (scene) and local (face) features is modeled using long short-term memory networks, and this joint distribution is converted into class scores by a softmax regression model.
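The final stage described above maps the fused scene and face representation to scores over the three sentiment classes via softmax regression. As a minimal sketch of that classification step (the logits below are made up for illustration and do not come from the paper's models):

```python
import math

CLASSES = ["negative", "neutral", "positive"]

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical fused logits for one image, as might be produced by the
# LSTM stage over the Scene-model and Face-model features.
logits = [0.2, 1.1, 2.3]
probs = softmax(logits)
prediction = CLASSES[probs.index(max(probs))]
print(prediction)  # the class with the highest logit wins -> "positive"
```

In practice the logits would be a learned linear function of the LSTM output; the softmax simply normalizes them into a probability distribution over the three categories.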
