Abstract

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, participants recognized facial emotions while their gaze behavior was monitored with eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to indicate (i.e., Yes/No response) whether the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions in which action units (AUs) were present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that decoders actively gazed at the relevant facial actions during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.

Highlights

  • Human facial expressions communicate various types of information about emotional states, social motives and intentions [1]

  • The first conclusion was that the facial expressions of basic emotions encompass complex patterns of facial actions

  • Emotional expressions that were recognized with high accuracy from only a few facial actions still benefited from additional actions, which allowed faster and/or more accurate recognition


Introduction

Human facial expressions communicate various types of information about emotional states, social motives, and intentions [1]. Despite the use of distinct facial actions for different emotion categories, Westerners show less clear-cut categorical boundaries between some of these categories than between others [8], with frequent confusions between, for example, anger and disgust or between fear and surprise [9]. For these reasons, becoming an efficient decoder of facial expressions within a particular cultural background is a challenge during both infancy and childhood. An impaired ability to recognize facial emotions has been associated with a broad range of neurological or psychiatric diseases [13,14,15,16,17], which suggests that distinct cognitive functions (e.g., attention, the ability to process complex stimuli [13], and verbal skills [18,19]) strongly participate in the ability to decode facial emotions.
