Abstract

Natural human facial expressions of emotion are dynamic events that progressively unfold over time. The question of whether humans can better recognize dynamic than static expressions has been a source of recent debate (e.g., Edwards, 1998; Ambadar, Schooler & Cohn, 2005; Fiorentini & Viviani, 2011; Lander, Christie & Bruce, 1999). Here, we take a novel approach to this issue by asking: 1) Does the information contained in a dynamic facial expression differ from that of a static expression? 2) How does the information content in dynamic expressions evolve over time? and 3) How efficiently do human observers make use of information when recognizing dynamic versus static facial expressions? To answer these questions, we measured both human and ideal observer contrast energy thresholds for recognizing dynamic and static facial expressions of 8 human actors (4 male, 4 female) making 6 different expressions of emotion (anger, disgust, fear, happiness, sadness, surprise). Dynamic stimuli evolved from a neutral to a full expression of emotion over the course of 30 frames (~1 second). Corresponding static stimuli were created by repeating the final frame of each dynamic stimulus for 30 frames. Ideal observer simulations revealed significantly lower thresholds for static than dynamic expressions, indicating that more information was available in the static than in the dynamic expressions. Additionally, a frame-by-frame ideal observer analysis of the dynamic expressions revealed a monotonic decrease in ideal thresholds across frames, indicating that the amount of information available at a given moment during the production of an expression systematically increased over time. As with the ideal observer, human thresholds were significantly lower for static than dynamic expressions, yielding efficiencies (the ratio of ideal to human thresholds) that were nearly identical across conditions. These results support the idea that the presence of dynamic cues offers no discernible processing efficiency advantage for human observers when recognizing facial expressions of emotion.

Meeting abstract presented at VSS 2012.
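
For readers unfamiliar with the quantities compared above, the sketch below illustrates the standard definitions assumed here: contrast energy as the summed squared Weber contrast over all pixels and frames (scaled by pixel area and frame duration), and absolute efficiency as the ratio of ideal to human contrast energy thresholds. The function names and the numeric threshold values are hypothetical placeholders for illustration only, not values from the study.

```python
import numpy as np

def contrast_energy(frames, mean_luminance, pixel_area=1.0, frame_duration=1.0 / 30):
    """Contrast energy of a space-time stimulus: the sum of squared Weber
    contrasts over all pixels and frames, scaled by pixel area and frame
    duration (standard definition; units depend on pixel_area/frame_duration)."""
    contrast = (frames - mean_luminance) / mean_luminance  # Weber contrast per pixel
    return float(np.sum(contrast ** 2) * pixel_area * frame_duration)

def efficiency(ideal_threshold, human_threshold):
    """Absolute efficiency: ideal observer threshold divided by human threshold.
    A value near 1 means the human uses nearly all of the available information."""
    return ideal_threshold / human_threshold

# Hypothetical threshold values (arbitrary units), used only to illustrate the
# pattern described in the abstract: lower ideal thresholds for static stimuli
# indicate more available information, while similar static and dynamic
# efficiencies indicate equally effective human use of that information.
ideal_thresholds = {"static": 0.8, "dynamic": 1.6}
human_thresholds = {"static": 40.0, "dynamic": 80.0}

for condition in ("static", "dynamic"):
    eff = efficiency(ideal_thresholds[condition], human_thresholds[condition])
    print(f"{condition}: efficiency = {eff:.3f}")

# Example contrast energy for a made-up 30-frame, 128x128 luminance stimulus.
rng = np.random.default_rng(0)
stimulus = 50 + 5 * rng.standard_normal((30, 128, 128))
print("contrast energy:", contrast_energy(stimulus, mean_luminance=50.0))
```

In this framing, the key result of the abstract is that the efficiency ratio, rather than the raw thresholds, is what matches across the static and dynamic conditions.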
