Abstract

Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classifying the initial eye movement towards one of two simultaneously presented faces. Interestingly, the identified features serve as better predictors than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which can help resolve the inconsistencies found in this line of research.
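
As a concrete illustration of the classification idea described above, the sketch below extracts localized HOG descriptors from each of two face images, uses their difference as the per-trial feature vector, and fits a linear classifier to predict which face attracts the initial eye movement. It is a minimal sketch, assuming grayscale images and scikit-image/scikit-learn; the parameter values, variable names and the choice of logistic regression are assumptions, not the authors' documented pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): per-trial features are the
# difference of HOG descriptors between the two simultaneously shown faces,
# and a linear classifier predicts the first-saccade target.
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def hog_features(face, cell=(16, 16)):
    """Localized Histogram-of-Oriented-Gradients descriptor for one face image."""
    return hog(face, orientations=8, pixels_per_cell=cell,
               cells_per_block=(1, 1), feature_vector=True)

def trial_features(left_face, right_face):
    """Difference of HOG descriptors between the two faces shown on a trial."""
    return hog_features(left_face) - hog_features(right_face)

# Hypothetical training data: grayscale face pairs and the observed first
# saccade target (0 = left face, 1 = right face) for each trial.
# X = np.stack([trial_features(l, r) for l, r in face_pairs])
# y = np.array(first_saccade_targets)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```

Using difference features suits the task structure: the initial eye movement is a two-alternative choice, so what should matter is the relative, not absolute, feature content of the two faces.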

Highlights

  • Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer

  • To test for biases in the initial eye movements based on emotional content, we first analysed the initial eye movements for trials where the two faces were presented at the same time and expressed different emotions

  • We found that initial eye movements can be predicted using the differences in either the spatial-structure information or the spatial-frequency contrast information in the face images (Figs. 5 and 6)
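
As a rough illustration of how the "spatial-frequency contrast information" in the last highlight could be turned into a per-trial predictor, the sketch below computes a radially averaged FFT amplitude spectrum for each face and takes the difference between the two faces. This is one assumed reading of the measure, not the paper's exact computation, and all names are illustrative.

```python
# Minimal sketch, assuming SF contrast can be summarized as the radially
# averaged amplitude spectrum of the 2-D Fourier transform of a face image.
import numpy as np

def radial_sf_spectrum(face, n_bins=32):
    """Radially averaged FFT amplitude spectrum: contrast energy per SF band."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(face)))
    h, w = face.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.linspace(0, radius.max(), n_bins + 1)
    idx = np.clip(np.digitize(radius.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=amp.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

# Per-trial SF predictor: how much more contrast energy one face carries than
# the other in each frequency band (hypothetical pairing of two face images).
# sf_diff = radial_sf_spectrum(left_face) - radial_sf_spectrum(right_face)
```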


Introduction

Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. While humans can express a multitude of emotional expressions, the general consensus in research on emotional expressions is that humans universally display six discrete affects: anger, fear, disgust, happiness, surprise and sadness[6,7]. These expressions deviate from the standard facial musculature configuration: the neutral expression. The authors of one visual-search study conclude that the state of the mouth alone might be sufficient to explain differences in search efficiency between happy and angry expressions, and they propose the display of teeth as the primary candidate mechanism for this difference[17]. Frischen, Eastwood & Smilek suggest that displaying teeth in emotional expressions produces a detection advantage because of the increased contrast in the mouth area relative to a closed mouth[18]. These findings suggest that attention effects towards emotional expressions may be better explained in terms of low-level image features rather than the emotional content of faces. This raises the question of which exact image properties are relevant and how they relate to emotional faces.
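
Frischen, Eastwood & Smilek's contrast account is, in principle, directly measurable. The sketch below compares local RMS contrast inside a mouth bounding box for teeth-showing versus closed-mouth expressions; the box coordinates, function names and the RMS definition of contrast are illustrative assumptions, not taken from the cited studies.

```python
# Minimal sketch of the mouth-contrast comparison; all coordinates and names
# are hypothetical, chosen only to make the idea concrete.
import numpy as np

def rms_contrast(patch):
    """Root-mean-square contrast: std of pixel intensities over their mean."""
    patch = patch.astype(float)
    return patch.std() / patch.mean()

def mouth_contrast(face, mouth_box=(96, 128, 48, 112)):
    """RMS contrast inside a (top, bottom, left, right) mouth bounding box."""
    top, bottom, left, right = mouth_box
    return rms_contrast(face[top:bottom, left:right])

# If the teeth account holds, mouth_contrast(open_mouth_face) should reliably
# exceed mouth_contrast(closed_mouth_face) across face identities.
```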
