Abstract

Computer-generated characters, so-called avatars, are widely used in advertising, entertainment and human–computer interaction, or as research tools to investigate human emotion perception. However, brain responses to avatar and human faces have scarcely been studied to date. As such, it remains unclear whether dynamic facial expressions of avatars evoke different brain responses than dynamic facial expressions of humans. In this study, we designed anthropomorphic avatars animated with motion tracking and tested whether the human brain processes fearful and neutral expressions in human and avatar faces differently. Our fMRI results showed that fearful human expressions evoked stronger responses than fearful avatar expressions in the ventral anterior and posterior cingulate gyrus, the anterior insula, the anterior and posterior superior temporal sulcus, and the inferior frontal gyrus. Fearful expressions in human and avatar faces evoked similar responses in the amygdala. We found no differences in responses to neutral human and avatar expressions. Our results highlight differences, but also similarities, in the processing of fearful human and fearful avatar expressions, even when the avatars are designed to be highly anthropomorphic and animated with motion tracking. This has important consequences for research using dynamic avatars, especially when the processes under investigation involve cortical and subcortical regions.

Highlights

  • While we are becoming more experienced with computer-generated characters, or avatars, that are used in animated films, social media or as human–computer interfaces, we do not know how we react and adapt to interactions with our virtual counterparts (Kätsyri et al., 2017; Hsu, 2019).

  • We used a set of videos developed for this study in a three-step process in cooperation with the Zurich University of the Arts: (i) fearful and neutral facial expressions were recorded from four human actors; (ii) for each actor, a customized avatar was created by a graphic artist (Pinterac SARL, France) to match their appearance; (iii) using motion tracking with 66 tracking points (FaceRig©, Holotech Studios SRL, Romania), the actors' recorded expressions were conveyed onto the avatar faces (see the landmark-tracking sketch after this list).

  • The same difference was observable for avatar faces, with fearful avatar expressions rated as more intense (Mdn = 3) than neutral avatar expressions (Mdn = 2; exact Wilcoxon test: z = −3.16, P < 0.001; see the statistics sketch after this list).
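The motion-transfer step described above relies on proprietary software (FaceRig), whose internals are not documented here. Purely as an illustrative analogue, and not the pipeline used in the study, the sketch below extracts per-frame facial landmarks from a recorded expression video with the open-source MediaPipe Face Mesh (which tracks a 468-point mesh rather than FaceRig's 66 points); the video file name is hypothetical.

    # Illustrative analogue only: per-frame facial landmark extraction with
    # MediaPipe Face Mesh (468 points), not FaceRig's proprietary 66-point tracking.
    import cv2
    import mediapipe as mp

    cap = cv2.VideoCapture("actor1_fearful.mp4")  # hypothetical recording
    frames_landmarks = []
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR.
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                points = result.multi_face_landmarks[0].landmark
                frames_landmarks.append([(p.x, p.y, p.z) for p in points])
    cap.release()
    # frames_landmarks now holds one landmark set per video frame; the study's
    # FaceRig-based pipeline similarly conveyed tracked expressions onto avatars.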

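The highlight above reports an exact Wilcoxon signed-rank test on intensity ratings. As a minimal sketch with made-up ratings (not the study's data or analysis code), a paired comparison of this kind could be run in Python with SciPy:

    # Minimal sketch with made-up example ratings (1-5 intensity scale);
    # not the study's data. Paired, non-parametric comparison of fearful vs.
    # neutral avatar expressions, analogous to the reported Wilcoxon test.
    import numpy as np
    from scipy import stats

    fearful_avatar = np.array([3, 4, 3, 3, 4, 3, 2, 3, 4, 3, 3, 4])
    neutral_avatar = np.array([2, 2, 1, 2, 3, 2, 2, 2, 2, 1, 2, 2])

    # SciPy's default method="auto" uses an exact null distribution when the
    # sample properties allow it and a normal approximation otherwise.
    res = stats.wilcoxon(fearful_avatar, neutral_avatar, alternative="greater")
    print(f"W = {res.statistic}, p = {res.pvalue:.4f}")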

Introduction

While we are becoming more experienced with computer-generated characters, or avatars, that are used in animated films, social media or as human–computer interfaces, we do not know how we react and adapt to interactions with our virtual counterparts (Kätsyri et al., 2017; Hsu, 2019). Central to the processing of facial expressions and facial identity is a distributed network of brain regions that responds more strongly to human faces than to other visual information (Dricu and Frühholz, 2016; Fernández Dols and Russell, 2017). In this neural face perception network, the posterior and anterior superior temporal sulcus (STS) as well as the inferior frontal gyrus (IFG) form a dorsal pathway sensitive to dynamic features of faces, such as facial motion and gaze. The inferior occipital gyrus, the fusiform gyrus (FG) and the anterior temporal lobe comprise the ventral pathway, where invariant features of faces such as form and configuration are processed (Haxby et al., 2000; Duchaine and Yovel, 2015).
