Abstract

The eye and mouth regions serve as the primary sources of facial information regarding an individual’s emotional state. The aim of this study was to provide a comprehensive assessment of the relative importance of these two information sources in the identification of different emotions. The stimuli were composite facial images, in which different expressions (Neutral, Anger, Disgust, Fear, Happiness, Contempt, and Surprise) were presented in the eyes and the mouth. Participants (21 women, 11 men, mean age 25 years) rated the expressions of 7 congruent and 42 incongruent composite faces by clicking on a point within the valence-arousal emotion space. Eye movements were also monitored. For most incongruent composite images, the perceived emotion corresponded to the expression of the eye region, the expression of the mouth region, or an average of the two. The happy expression was different: happy eyes often shifted the perceived emotion towards a slightly negative point in the valence-arousal space, not towards the location associated with a congruent happy expression. The eye-tracking data revealed significant effects of congruency and expression, as well as their interaction, on total dwell time. Our data indicate that whether a face combining features from two emotional expressions leads to a percept based on only one of the expressions (categorical perception), on an integration of the two expressions (dimensional perception), or on something altogether different depends strongly upon the expressions involved.
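
To make the stimulus construction concrete, the following is a minimal sketch of how such a composite face can be assembled from two aligned photographs of the same model, with the eye region taken from one expression and the mouth region from another. The file names, the RGB mode, and the half-height split point are illustrative assumptions, not the study's actual stimulus pipeline.

```python
# Minimal sketch of composite-face construction, assuming two aligned
# photographs of the same model. File names and the half-height split
# between eye and mouth regions are illustrative assumptions.
from PIL import Image

eyes_src = Image.open("model01_happy.png").convert("RGB")   # expression shown in the eye region
mouth_src = Image.open("model01_anger.png").convert("RGB")  # expression shown in the mouth region

w, h = eyes_src.size
split = h // 2  # assumed boundary between the eye and mouth regions

composite = Image.new("RGB", (w, h))
composite.paste(eyes_src.crop((0, 0, w, split)), (0, 0))       # top half: eyes
composite.paste(mouth_src.crop((0, split, w, h)), (0, split))  # bottom half: mouth
composite.save("composite_happy_eyes_anger_mouth.png")
```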

Highlights

  • The average valence-arousal (V-A) ratings for the congruent stimuli were in striking agreement with the multidimensional scaling (MDS) analysis of the terms that participants used to describe the expressions (Fig 2B)

  • We provide the FDR-corrected p-values and the effect sizes of all pairwise comparisons, as well as rating averages plotted in the V-A space, as supplementary figures (S1 Fig); a minimal sketch of the FDR correction follows this list
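
As a concrete illustration of the FDR correction mentioned in the last highlight, the sketch below applies the Benjamini-Hochberg procedure to a family of pairwise comparisons using statsmodels. The p-values and the alpha level are made up for demonstration; this is not the study's analysis code.

```python
# Hypothetical illustration of Benjamini-Hochberg FDR correction for a
# family of pairwise comparisons; the p-values and alpha are made up.
import numpy as np
from statsmodels.stats.multitest import multipletests

raw_p = np.array([0.001, 0.012, 0.030, 0.041, 0.210, 0.480])

reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, q, sig in zip(raw_p, p_fdr, reject):
    print(f"raw p = {p:.3f}   FDR-corrected p = {q:.3f}   significant: {sig}")
```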

Introduction

People are experts in the processing of the visual information provided by human faces [1,2]. When interpreting the emotional expression on a face, the eye and mouth areas represent the primary sources of information [3]. The emotional expressions produced by the mouth region may carry more bottom-up saliency and may constitute a potentially more reliable source of information [4]. Research indicates that the relative roles of the eye region (i.e., the top half) and the mouth region (i.e., the bottom half) differ between emotional expressions. A study using partially masked faces (the "bubbles" technique) found that observers classified happy, surprised, and disgusted expressions primarily on the basis of the mouth region, and angry expressions primarily on the basis of the eye region.
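
The "bubbles" technique referred to above reveals a face only through a few randomly placed Gaussian apertures, so that classification performance can be related to which facial regions happened to be visible. The sketch below is a generic reconstruction of such a mask, not the cited study's code; the bubble count and aperture width (sigma) are arbitrary choices for illustration.

```python
# Generic reconstruction of a "bubbles" mask: the face is revealed only
# through a few randomly placed Gaussian apertures.
import numpy as np

def bubble_mask(height, width, n_bubbles=5, sigma=12.0, seed=None):
    """Return a (height, width) transparency mask with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for _ in range(n_bubbles):
        cy = rng.integers(0, height)  # random aperture center (row)
        cx = rng.integers(0, width)   # random aperture center (column)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma**2))
    return np.clip(mask, 0.0, 1.0)

# Usage: multiply a grayscale face array by the mask to occlude most regions.
face = np.random.rand(256, 256)  # stand-in for a normalized face image
revealed = face * bubble_mask(256, 256, seed=0)
```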
