Abstract

<p><span>Automatic emotion recognition, the analysis of a person’s emotional state, has been an active research area for decades. It remains a challenging task in computer vision and artificial intelligence because of its high intra-class variation. A key advantage of emotion recognition is that a person’s emotion can be identified even when the subject is far from a surveillance camera; at such distances, however, it is difficult to identify emotion from facial expression alone. Recognition in this scenario improves when visual body cues (facial actions, hand postures, body gestures) are added, since body posture can powerfully convey a person’s emotional state. This paper analyses frontal-view human body movements, visual expressions, and body gestures to identify various emotions. First, we extract the motion information of the body gestures using dense optical flow models. The resulting high-level motion feature frames are then passed to pre-trained convolutional neural network (CNN) models to recognize the 17 emotions in the Geneva Multimodal Emotion Portrayals (GEMEP) dataset. In the experimental results, AlexNet demonstrates the architecture’s effectiveness with an overall accuracy of 96.63% on the GEMEP dataset, better than using raw frames, compared with 94% for visual geometry group VGG-19 and 93.35% for VGG-16. This shows that the dense optical flow method, combined with transfer learning, performs well for recognizing emotions.</span></p>
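The pipeline described above converts dense optical-flow fields into 3-channel "motion feature frames" suitable as CNN input. Below is a minimal NumPy sketch of that encoding step, assuming a flow field is already available (e.g., from OpenCV's Farneback method); the function name and the exact hue/magnitude mapping are illustrative assumptions, since the abstract does not specify the paper's encoding.

```python
import numpy as np

def flow_to_feature_frame(flow):
    """Encode a dense optical-flow field of shape (H, W, 2) as an
    HSV-style image: flow angle -> hue, flow magnitude -> value.
    The 3-channel result can be fed to a pre-trained CNN such as
    AlexNet or VGG. (Hypothetical helper; the exact encoding used
    in the paper is not given in the abstract.)"""
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(dx ** 2 + dy ** 2)
    ang = np.arctan2(dy, dx)  # radians in [-pi, pi]
    # Map angle to OpenCV-style hue range [0, 179]
    hue = ((ang + np.pi) / (2 * np.pi) * 179).astype(np.uint8)
    sat = np.full_like(hue, 255)
    # Normalize magnitude to [0, 255] (epsilon guards zero motion)
    val = np.clip(mag / (mag.max() + 1e-8) * 255, 0, 255).astype(np.uint8)
    return np.stack([hue, sat, val], axis=-1)  # (H, W, 3) uint8

# Usage on a synthetic flow field with uniform rightward motion
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 3.0
frame = flow_to_feature_frame(flow)
print(frame.shape, frame.dtype)
```

In practice the flow field would come from consecutive video frames (e.g., `cv2.calcOpticalFlowFarneback`), and the resulting feature frames would be resized to the CNN's expected input size (224×224 for VGG, 227×227 for AlexNet) before transfer learning.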
