Abstract

Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, its applicability to more naturalistic facial expressions remains uncertain. Hence, we test and compare the performance of three different FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is most accurate) and are mostly on par with human raters. For the non-standardized stimuli, performance drops markedly for all three systems, but Azure still performs similarly to humans. In addition, all systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; clearly there is a need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.

Highlights

  • Detecting emotional processes in humans is important in many research fields such as psychology, affective neuroscience, or political science

  • Analysis shows that the non-standardized facial expressions are perceived as much more genuine compared to the standardized facial expressions [standardized inventories: M = 4.00, SD = 1.43; non-standardized inventory: M = 5.64, SD = 0.79; t(2606) = 36.58, p < 0.001, d = 1.44]

  • Non-standardized facial expressions are rated as more genuine for anger, t(426) = 27.97, p < 0.001, d = 2.75, sadness, t(418) = 25.55, p < 0.001, d = 2.43, fear, t(317) = 21.10, p < 0.001, d = 2.38, disgust, t(263) = 18.10, p < 0.001, d = 2.36, surprise, t(322) = 16.02, p < 0.001, d = 1.79, and joy, t(441) = 5.58, p < 0.001, d = 0.54, whereas among the standardized inventories neutral facial expressions are rated as more genuine, t(407) = 2.36, p = 0.019, d = 0.24. These results support the validity of the selection of image test data: the standardized facial expressions are perceived as less genuine compared to the non-standardized facial expressions
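The Cohen's d values above can be approximated from the reported group means and standard deviations using the pooled-SD formula. A minimal Python sketch, assuming (since the per-group sample sizes are not reported here) equal group sizes for the pooling:

```python
import math

def cohens_d(m1: float, sd1: float, m2: float, sd2: float) -> float:
    """Cohen's d with a pooled SD, assuming equal group sizes."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m2 - m1) / pooled_sd

# Genuineness ratings: standardized (M=4.00, SD=1.43) vs.
# non-standardized (M=5.64, SD=0.79) facial expressions.
d = cohens_d(4.00, 1.43, 5.64, 0.79)
print(round(d, 2))  # ≈ 1.42, close to the reported d = 1.44
```

The small discrepancy from the reported value is expected, as the exact pooled SD depends on the actual (unequal) group sizes.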


Introduction

Detecting emotional processes in humans is important in many research fields such as psychology, affective neuroscience, or political science. The classic approach to analysing emotional facial responses is either expert observation, such as the Facial Action Coding System (FACS) (Sullivan and Masters, 1988; Ekman and Rosenberg, 1997; Cohn et al., 2007), or direct measurement of facial muscle activity with electromyography (EMG) (Cohn et al., 2007). Both are time-consuming with respect to both application and analysis. FER technology is used in consumer and market research, for example to predict advertisement efficiency (Lewinski et al., 2014; Teixeira et al., 2014; Bartkiene et al., 2019).

