Abstract

Dynamic facial expressions are crucial for communication in primates. Because of the difficulty of controlling the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces has evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion-capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly and that facial dynamics were represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, and it challenges appearance-based neural network theories of dynamic expression recognition.

Highlights

  • Facial expressions are crucial for the social communication of human as well as non-human primates (Calder, 2011; Darwin, 1872; Jack and Schyns, 2017; Curio et al., 2010), and humans can learn facial expressions even of other species (Nagasawa et al., 2015).

  • Since natural video stimuli provide no accurate control of the dynamics and form features of facial expressions, it is unknown how expression dynamics is perceptually encoded across different primate species and how it interacts with the representation of facial shape.

  • Our studies investigated the perceptual representations of dynamic human and monkey facial expressions in human observers, exploiting photo-realistic human and monkey face avatars (Figure 1A).

Introduction

Facial expressions are crucial for the social communication of human as well as non-human primates (Calder, 2011; Darwin, 1872; Jack and Schyns, 2017; Curio et al., 2010), and humans can learn facial expressions even of other species (Nagasawa et al., 2015). The structure and arrangement of facial muscles are highly similar across different primate species (Vick et al., 2007; Parr et al., 2010), while face shapes differ considerably, for example, between humans, apes, and monkeys. This motivates two hypotheses: (1) the phylogenetic continuity in motor control should facilitate fast learning of dynamic expressions across primate species, and (2) the considerable differences in facial shape suggest that expression dynamics is encoded largely separately from facial shape. The second hypothesis is consistent with a variety of functional imaging data suggesting a partial separation of the anatomical structures that process changeable and non-changeable aspects of faces (Haxby et al., 2000; Bernstein and Yovel, 2015). We investigated these hypotheses by exploiting advanced methods from computer animation and machine learning, combined with motion capture in monkeys and humans. Our results specify fundamental constraints for the computational neural mechanisms of dynamic face processing and challenge popular neural network models that account for expression recognition by learning sequences of key shapes (e.g., Curio et al., 2010).
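
The abstract and introduction describe the stimulus-generation approach only at a high level (photo-realistic avatars driven by motion capture, with expression dynamics controlled by a Bayesian machine-learning technique). As a rough illustration of the underlying idea, the sketch below blends a human and a monkey motion-capture trajectory of facial control parameters with a single morphing weight and smooths the result with Gaussian-process regression over normalized time; the resulting control trajectory could then drive either head model. This is a minimal sketch under stated assumptions, not the authors' implementation: the array shapes, the placeholder trajectories, the function blend_dynamics, and the use of scikit-learn's GaussianProcessRegressor are all illustrative choices.

    # Illustrative sketch (not the authors' method): blending motion-capture
    # expression dynamics across species and smoothing with a Gaussian process.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical time-normalized trajectories of D facial control parameters
    # (T frames x D parameters), e.g. blendshape weights driving an avatar head.
    T, D = 100, 25
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, T)[:, None]                    # normalized time
    human_dyn = np.sin(2 * np.pi * t) @ rng.random((1, D))   # placeholder data
    monkey_dyn = np.cos(2 * np.pi * t) @ rng.random((1, D))  # placeholder data

    def blend_dynamics(human, monkey, w, n_out=100):
        """Convex blend of two expression trajectories (w=1 -> pure human
        dynamics, w=0 -> pure monkey dynamics), smoothed and resampled with
        Gaussian-process regression over normalized time."""
        mixed = w * human + (1.0 - w) * monkey
        kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-4)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(np.linspace(0.0, 1.0, len(mixed))[:, None], mixed)
        t_out = np.linspace(0.0, 1.0, n_out)[:, None]
        return gp.predict(t_out)                     # (n_out, D) trajectory

    # The same blended control trajectory can then be applied to either the
    # human or the monkey avatar head, decoupling dynamics from face shape.
    intermediate = blend_dynamics(human_dyn, monkey_dyn, w=0.5)
    print(intermediate.shape)                        # (100, 25)

The point the sketch is meant to make explicit is the factorization implied by the experimental design: expression dynamics are generated in a shape-independent control space, so any dynamic trajectory, whether human, monkey, or an intermediate blend, can be paired with either head geometry.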

