Abstract

We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engage predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backward mask, participants were presented with a probe face that displayed either the same emotion as the cue (congruent) or a different emotion (incongruent), although expressed at a high intensity. The probe face had either the same identity as the cued face or a different one. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.


Introduction

The ability to make rapid judgements about the emotional states of conspecifics from their facial displays is a fundamental component of the human neurocognitive system [1]. The field's historic reliance on non-moving stimuli is counterintuitive: real-life facial expressions are dynamic, so dynamic stimulus materials should promote ecological validity. Facial expressions of affect are often explicitly characterised in terms of dynamic actions (e.g., [6]), and the predominant neurocognitive models of face processing [7] emphasise separable processing mechanisms for the dynamic aspects of faces. That the preponderance of studies conducted in the area has used static pictures of facial affect may, in part, reflect difficulties in achieving adequately controlled stimuli. It may also reflect historical limitations in stimulus delivery systems, which recent work suggests are surmountable [8].


