Abstract

Six ‘universal’ facial expressions – ‘Happy,’ ‘Surprise,’ ‘Fear,’ ‘Disgust,’ ‘Anger,’ and ‘Sadness’ – are defined by specific, static patterns of facial muscle activation (Facial Action Coding System codes, FACS). However, systematic differences in facial expression recognition between Western Caucasians (WC) and East Asians (EA) question the notion of universality, raising a new question: How do different cultures represent facial expressions? Here, we derived culture-specific models of facial expressions using state-of-the-art 4D imaging (dynamics of 3D face shape and texture) combined with reverse correlation techniques. Specifically, we modelled 41 core Action Units (AUs, groups of facial muscles) from certified FACS coders and parameterized each using 6 temporal parameters (peak amplitude, peak latency, onset latency, offset latency, acceleration, and deceleration). The 41 AUs and their parameters formed the basis of a pseudo-random generative model of expressive signals. On each trial, we pseudo-randomly selected parametric values for each AU, producing an expressive facial animation (see Figure S1 in Supplementary Material). 10 WC and 10 EA naïve observers each categorized 9,600 such animations according to the 6 emotion categories listed above and rated the perceived intensity of the emotion (see Figure S1 in Supplementary Material). We then reverse correlated the dynamic properties of the AUs with the emotion categories they elicited, producing “dynamic classification models” (i.e., expected 4D face information) per emotion and observer. Analyses of the models reveal clear cultural contrasts in (a) the presence/absence of specific AUs, predicting the reported EA miscategorizations, and (b) radically different temporal dynamics of emotional expression, whereby EA observers expect “smoother” emotional displays with lower acceleration and amplitude (see link in Supplementary Material for example videos). For the first time, we reveal cultural diversity in the dynamic signals representing each basic emotion, demonstrating that the complexities of emotion cannot adequately be reduced to a single set of static ‘universal’ signals.
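To make the two core steps concrete, the following is a minimal sketch in Python/NumPy of the logic described above: pseudo-random sampling of AU temporal parameters on each trial, followed by reverse correlation of those parameters with categorization responses. The sampling distributions, AU activation rate, and simulated observer responses are hypothetical placeholders for illustration, not the study's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AUS = 41           # core Action Units in the generative model
N_PARAMS = 6         # peak amplitude, peak latency, onset latency,
                     # offset latency, acceleration, deceleration
N_TRIALS = 9600      # animations categorized per observer
EMOTIONS = ["Happy", "Surprise", "Fear", "Disgust", "Anger", "Sadness"]

def generate_trial():
    """Pseudo-randomly select which AUs are active on this trial and
    draw their temporal parameters (here uniform in [0, 1] and a 25%
    activation rate, both hypothetical choices for illustration)."""
    active = rng.random(N_AUS) < 0.25
    params = rng.random((N_AUS, N_PARAMS)) * active[:, None]
    return params

# Collect stimuli and responses. In the study, human observers
# categorized each animation; here responses are random placeholders.
stimuli = np.stack([generate_trial() for _ in range(N_TRIALS)])
responses = rng.integers(len(EMOTIONS), size=N_TRIALS)

# Reverse correlation: average the AU parameters over all trials the
# observer assigned to a given emotion, yielding one dynamic
# "classification model" per emotion category.
models = {emo: stimuli[responses == i].mean(axis=0)
          for i, emo in enumerate(EMOTIONS)}
print(models["Happy"].shape)   # (41, 6): AUs x temporal parameters
```

Run per observer, such models can then be compared across the WC and EA groups, e.g., contrasting which AUs are present and how their temporal parameters differ between cultures.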
