Abstract

Automatic facial expression recognition (FER) is one of the most challenging tasks in computer vision. FER admits a wide range of applications in human–computer interaction, behavioral psychology, and human expression synthesis. Extensive work has been reported in this field, mainly based on handcrafted features. However, accurately extracting all the correlated handcrafted features is difficult because of variations caused by emotional state. Further research is therefore needed on extracting relevant features that can capture changes in facial expressions (FEs) with high fidelity. In this study, we propose FER-net: a convolutional neural network that distinguishes FEs efficiently with the help of a softmax classifier. We implement our method FER-net along with twenty-one state-of-the-art methods and test them on five benchmark datasets, namely FER2013, Japanese Female Facial Expressions, Extended Cohn–Kanade, Karolinska Directed Emotional Faces, and Real-world Affective Faces. Seven FEs, namely neutral, anger, disgust, fear, happiness, sadness, and surprise, are considered in this work. The average accuracies on these datasets are 78.9%, 96.7%, 97.8%, 82.5%, and 81.68%, respectively. The obtained results demonstrate that FER-net outperforms the twenty-one state-of-the-art methods.
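The abstract does not detail FER-net's layers, so the following is only a minimal, hypothetical sketch of the final classification step it mentions: a softmax over seven expression classes. The function names and logit values here are illustrative assumptions, not the authors' implementation.

```python
import math

# The seven expressions considered in the study.
EXPRESSIONS = ["neutral", "anger", "disgust", "fear",
               "happiness", "sadness", "surprise"]

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_expression(logits):
    """Return the most probable expression label and the full distribution."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EXPRESSIONS[best], probs

# Hypothetical logits, as might come from the final layer of a CNN.
label, probs = predict_expression([0.2, -1.0, -0.5, 0.1, 2.3, -0.3, 0.4])
print(label)
```

In a full pipeline, these logits would be produced by the network's last fully connected layer; the softmax only normalizes them so the class with the largest score receives the highest probability.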
