Abstract

Facial Expression Recognition (FER) has long been a challenging task in the field of computer vision. Most existing FER methods extract facial features directly from face pixels, ignoring the relative geometric position dependencies of facial landmark points. This article presents a hybrid feature extraction network to enhance the discriminative power of emotional features. The proposed network consists of a Spatial Attention Convolutional Neural Network (SACNN) and a series of Long Short-Term Memory networks with an Attention mechanism (ALSTMs). The SACNN is employed to extract expressional features from static face images, and the ALSTMs are designed to explore the potential of facial landmarks for expression recognition. A deep geometric feature descriptor is proposed to characterize the relative geometric position correlations of facial landmarks. The landmarks are divided into seven groups to extract deep geometric features, and the attention module in the ALSTMs can adaptively estimate the importance of different landmark regions. By jointly combining the SACNN and ALSTMs, hybrid features are obtained for expression recognition. Experiments conducted on three public databases, FER2013, CK+, and JAFFE, demonstrate that the proposed method outperforms previous methods, with accuracies of 74.31%, 95.15%, and 98.57%, respectively. The preliminary results of the Emotion Understanding Robot System (EURS) indicate that the proposed method has the potential to improve the performance of human-robot interaction.
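The abstract's geometry-level descriptor can be illustrated with a minimal sketch. The exact grouping and descriptor used in the paper are not specified here, so the seven-region partition below (the standard 68-point dlib-style landmark layout) and the pairwise-difference feature are assumptions for illustration only:

```python
import numpy as np

# Hypothetical partition of the 68 dlib-style landmarks into seven
# facial regions; the paper's actual grouping may differ.
REGIONS = {
    "jaw": range(0, 17),
    "left_brow": range(17, 22),
    "right_brow": range(22, 27),
    "nose": range(27, 36),
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "mouth": range(48, 68),
}

def geometric_features(landmarks):
    """Relative-position descriptor per landmark region.

    landmarks: (68, 2) array of (x, y) coordinates.
    Returns a dict mapping region name -> flat vector of pairwise
    coordinate differences within that region.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    feats = {}
    for name, idx in REGIONS.items():
        pts = landmarks[list(idx)]                 # (k, 2) points in region
        # Pairwise differences encode relative geometric positions and
        # are invariant to global translation of the face.
        diffs = pts[:, None, :] - pts[None, :, :]  # (k, k, 2)
        feats[name] = diffs.reshape(-1)
    return feats

# Toy usage: translating the whole face leaves the features unchanged.
pts = np.random.rand(68, 2)
f1 = geometric_features(pts)
f2 = geometric_features(pts + 5.0)
assert all(np.allclose(f1[k], f2[k]) for k in REGIONS)
```

In the full model, each of the seven per-region feature sequences would feed one of the attention-weighted LSTM branches, letting the attention module weight the regions most relevant to the expression.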

Highlights

  • Facial expressions, which convey useful nonverbal cues in daily social communication, are one of the most important features for recognizing the emotional states of human beings

  • The improvement in accuracy stems from the geometry-level features extracted from facial landmarks, which shows that such geometric features are beneficial for improving expression recognition performance

  • This may be explained by the attention mechanism assigning larger weights to the relative positional dependencies of facial landmarks in the areas associated with facial expressions

Introduction

Facial expressions, which convey useful nonverbal cues in daily social communication, are one of the most important features for recognizing the emotional states of human beings. Due to its potential applications in a multitude of research fields, such as affective computing [1], computer vision [2], medical assessment [3], and Human-Robot Interaction (HRI) [4], Facial Expression Recognition (FER) has drawn an upsurge of interest in recent years. Numerous studies have been conducted on emotion recognition from facial expression images over the last few decades. Distinguishing facial expressions accurately remains a challenging task because of the impact of irrelevant facial information, which arises from pose variation, partial occlusion (e.g., hair, glasses), and background clutter. Capturing and representing the most discriminative expression-related features is a key issue to be addressed in facial expression analysis.
