Abstract

How human beings achieve efficient recognition of others’ facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contribution of connectivity patterns to the processing of facial expressions has remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions can be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block-design experiment and recorded neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracy and emotional intensity. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from these patterns. Moreover, we identified expression-discriminative networks for static and dynamic facial expressions that extend beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns contain rich expression information sufficient to accurately decode facial expressions, suggesting a novel mechanism for human facial expression recognition that involves interactions among distributed brain regions.

Highlights

  • Facial expression is an important medium for social communication as it conveys information about others’ emotion

  • The expression-discriminative networks included conventional face-selective areas (the insula, inferior frontal gyrus, superior temporal gyrus, lateral occipital cortex, temporal fusiform cortex, and amygdala) that were commonly studied in previous functional magnetic resonance imaging (fMRI) studies on facial expression perception (Fox C.J. et al, 2009; Trautmann et al, 2009; Furl et al, 2013, 2015; Johnston et al, 2013; Harris et al, 2014)

  • Using multivariate connectivity pattern analysis and machine learning algorithms, we successfully decoded both static and dynamic facial expressions from the functional connectivity (FC) patterns


Introduction

Facial expression is an important medium for social communication, as it conveys information about others’ emotions. The mechanism that enables the human brain to achieve efficient recognition of facial expressions has been intensively studied. The usual approach to exploring facial expression perception is to record brain activity patterns while participants are presented with facial stimuli. Jin et al (2012, 2014a,b) have made substantial efforts on stimulus presentation approaches with face stimuli. Previous fMRI studies on facial expression perception mainly employed static expression images as stimuli (Gur et al, 2002; Murphy et al, 2003; Andrews and Ewbank, 2004). Recent studies with dynamic stimuli have found enhanced brain activation compared with static stimuli, and have shown that, in addition to the conventional face-selective areas, motion-sensitive areas respond significantly to facial expressions (Furl et al, 2012, 2013, 2015).
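The core fcMVPA idea, decoding a stimulus category from functional connectivity patterns rather than from regional activation levels, can be sketched as follows. This is a minimal illustrative sketch on synthetic data, not the authors' actual pipeline: the ROI count, time-series lengths, and the injected condition-dependent coupling are all hypothetical, chosen only so that the decoding step has something to find.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_rois, n_timepoints = 60, 20, 50  # hypothetical dimensions

def fc_pattern(ts):
    """Vectorize the upper triangle of the ROI-by-ROI correlation matrix."""
    corr = np.corrcoef(ts)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Synthetic ROI time series for two expression conditions. A shared signal
# is added to the first five ROIs in condition 1 only, so the conditions
# differ in their connectivity (coupling), not in mean activation.
X, y = [], []
for trial in range(n_trials):
    label = trial % 2
    ts = rng.standard_normal((n_rois, n_timepoints))
    if label == 1:
        shared = rng.standard_normal(n_timepoints)
        ts[:5] += 0.8 * shared
    X.append(fc_pattern(ts))
    y.append(label)
X, y = np.array(X), np.array(y)

# Cross-validated decoding of condition labels from FC patterns.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=cv).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Because the two conditions have identical marginal statistics per ROI and differ only in inter-regional correlations, above-chance accuracy here demonstrates that the classifier is reading connectivity structure, which is the distinction the fcMVPA approach relies on.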
