Abstract

The emotional expression of the face provides an important social signal that allows humans to make inferences about other people's state of mind. However, the underlying brain mechanisms are complex and still not completely understood. Using magnetoencephalography (MEG), we analyzed the spatiotemporal structure of regional electrical brain activity in human adults during a categorization task (faces or hands) and an emotion discrimination task (happy faces or neutral faces). Brain regions that are important for different aspects of processing emotional facial expressions showed distinct patterns of hemispheric dominance. The dorsal brain regions showed a right predominance when participants paid attention to facial expressions: the right parietofrontal regions, including the somatosensory, motor/premotor, and inferior frontal cortices, showed significantly increased activation in the emotion discrimination task compared with the categorization task at latencies of 350 to 550 ms, whereas no such activation was found in their left hemispheric counterparts. Furthermore, within the emotion discrimination task, the ventral brain regions showed a left predominance for happy faces compared with neutral faces at latencies of 350 to 550 ms. Thus, the present data suggest that the right and left hemispheres play different roles in the recognition of facial expressions depending on the cognitive context.

Highlights

  • The capacity to recognize facial expressions is one of the most important abilities in human social interaction

  • It is well known that humans can discriminate at least six emotional expressions: happiness, surprise, fear, sadness, anger, and disgust [1,2]

  • The model assumes two major neural pathways, one to process invariant aspects of faces leading to facial identification, and another to process changeable aspects of faces such as eye gaze, expression, and lip movements



Introduction

The capacity to recognize facial expressions is one of the most important abilities in human social interaction. Behavioral studies have shown a dissociation between the processing of facial identity and facial expression [10,11]. These observations support Bruce and Young's (1986) cognitive model of face recognition [12], which proposed distinct module-based processing pathways for facial identification, emotional expression, and speech-related facial movements. The model assumes two major neural pathways: one processes invariant aspects of faces, leading to facial identification, and the other processes changeable aspects of faces such as eye gaze, expression, and lip movements.

