Abstract

Several regions of the human brain respond more strongly to faces than to other visual stimuli, including regions in the amygdala (AMG), along the superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear whether these brain regions are similar in representing the configuration or the natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearranged. Response amplitudes in the AMG and STS were reduced when subjects viewed stimuli whose configuration of parts was digitally rearranged, suggesting that these regions represent the 1st order configuration of face parts. In contrast, response amplitudes in the FFA showed little modulation whether face parts were rearranged or natural face parts were replaced with lines. Instead, FFA responses were reduced only when both configural and part information were reduced. This interaction between the two factors suggests that 1st order face configuration and face parts are represented distinctly in the AMG and STS vs. the FFA.

Highlights

  • Human faces convey socially relevant information about emotion, intention and identity

  • Repeated-measures analysis of variance on response latencies during the one-back task showed a significant effect of visual stimulus category when all visual stimuli in Experiment 1 were included, but not when the comparison was limited to face stimuli in a post-hoc analysis [all stimuli: F(4, 17) = 3.10, P = 0.03; face stimuli: F(1, 17) = 0.25, P = 0.63]

  • Differential fMRI responses to natural and rearranged faces (voxel-wise group analysis): to determine regions across the brain that respond to the 1st order configural information in faces, we examined the contrast of natural faces > rearranged natural faces

Introduction

Human faces convey socially relevant information about emotion, intention, and identity. Face-selective regions along the superior temporal sulcus (STS) are involved in detecting facial movements associated with eye gaze, speech, and expression of emotions and intentions (Puce et al., 1998; Allison et al., 2000; Thompson et al., 2007; Cohen Kadosh et al., 2010; Esterman and Yantis, 2010). Much research on the face-processing network has focused on elucidating the distinct functional properties of each region, the interactions among these regions, and their common pathways. It remains unknown which specific facial cues differentially engage these brain regions in face processing.
