Abstract

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1,2,3,4,5 including specific categories, such as “anger,” and broader dimensions, such as “negative valence, high arousal.”6,7,8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information—i.e., specific categories and broader dimensions—via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver’s perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10,11,12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent—i.e., multiplex—categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results—based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms—show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
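As a rough intuition for the perception-based modeling approach, the sketch below implements a toy reverse-correlation analysis in Python, assuming a simplified setup in which each trial presents a random binary combination of AUs and a perceiver categorizes the result. The AU list, trial count, and the simulated responder are hypothetical placeholders, not the authors' generative platform or data.

    # Toy reverse-correlation sketch: estimate which AUs drive a percept by
    # comparing AU frequencies on "target" trials versus all trials.
    # All names and the simulated perceiver below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    AUS = ["AU4_brow_lowerer", "AU6_cheek_raiser", "AU7_lid_tightener",
           "AU10_upper_lip_raiser", "AU12_lip_corner_puller"]
    N_TRIALS = 5000

    # Each trial presents a random binary combination of AUs (1 = AU active).
    trials = rng.integers(0, 2, size=(N_TRIALS, len(AUS)))

    # Placeholder perceiver: responds "happy" when the smile-related AUs
    # co-occur. In an experiment this would be a human categorization response.
    responses = (trials[:, AUS.index("AU6_cheek_raiser")] &
                 trials[:, AUS.index("AU12_lip_corner_puller")]).astype(bool)

    # Reverse correlation: AUs that drive the percept appear more often on
    # "happy" trials than overall, yielding positive weights.
    weights = trials[responses].mean(axis=0) - trials.mean(axis=0)

    for au, w in zip(AUS, weights):
        print(f"{au:>24s}: {w:+.3f}")

Running this recovers positive weights (about +0.5) for AU6 and AU12, the AUs that drive the simulated "happy" percept, and near-zero weights for the rest; in the real method, dynamic AU parameters and human perceivers take the place of these stand-ins.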

Highlights

  • Human facial expressions are complex dynamic signals composed of combinations of individual facial movements called action units (AUs)14,15—for example, smiles often comprise lip corner puller (AU12) and cheek raiser (AU6), and scowls often comprise brow lowerer (AU4), lid tightener (AU7), and upper lip raiser (AU10).16

  • Mapping facial expression signals of emotion categories and dimensions: having modeled the facial expression signals that elicit the perception of emotion categories and of dimensions, we examined whether they share certain facial movements by mapping the former onto the latter and examining their embedding.

  • These results show that facial expression signals that elicit the perception of emotion categories are embedded within those that elicit dimensional perceptions, suggesting that a latent set of shared AUs jointly represents—i.e., multiplexes—emotion category and dimensional information (a minimal sketch of this logic follows below).
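To make the multiplexing logic concrete, below is a minimal information-theoretic sketch in Python on fabricated data; the variables (au_active, category, valence), the noise level, and the labels are illustrative assumptions, not the paper's stimuli or analyses. An AU that multiplexes should share non-zero mutual information with both the categorical and the dimensional variable.

    # Minimal multiplexing check: does one AU carry information about both an
    # emotion category and a valence dimension? Data below is fabricated.
    import numpy as np

    def mutual_information(x, y):
        """Discrete mutual information I(X;Y) in bits, from empirical probabilities."""
        x, y = np.asarray(x), np.asarray(y)
        mi = 0.0
        for xv in np.unique(x):
            for yv in np.unique(y):
                pxy = np.mean((x == xv) & (y == yv))
                px, py = np.mean(x == xv), np.mean(y == yv)
                if pxy > 0:
                    mi += pxy * np.log2(pxy / (px * py))
        return mi

    rng = np.random.default_rng(1)
    n = 2000
    category = rng.choice(["anger", "happy"], size=n)            # perceived category
    au_active = (category == "anger") ^ (rng.random(n) < 0.1)    # AU tracks anger, noisily
    valence = np.where(category == "anger", "negative", "positive")  # category implies valence

    # A multiplexed AU shares information with both the category and the dimension.
    print(f"I(AU; category) = {mutual_information(au_active, category):.3f} bits")
    print(f"I(AU; valence)  = {mutual_information(au_active, valence):.3f} bits")

In the actual analyses, human perceivers' categorization and valence/arousal responses would replace these simulated labels, with the information-theoretic measures computed over the modeled facial signal components.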


Graphical Abstract

Liu et al. examine how facial expressions signal broad-plus-specific emotion category and dimensional information. Using a perception-based facial-signal modeling technique and information-theoretic analyses, they find a latent set of facial signals that can multiplex categorical and dimensional information and a subset uniquely signaling either.

  • Examined facial signals of broad-plus-specific emotion categories and dimensions
  • Used data-driven, perception-based modeling and information-theoretic analyses
  • Disentangled facial signals that multiplex broad-plus-specific emotion information
  • Provides insights into facial expression ontology and a new methodological framework

