Abstract

The superior temporal sulcus (STS) is a major component of the human face perception network, implicated in processing dynamic, changeable aspects of faces. However, it remains unknown whether the STS holds functionally segregated subdivisions for different categories of facial movements. We used high-resolution functional magnetic resonance imaging (fMRI) at 7T in 16 volunteers to compare STS activation in response to faces displaying angry or happy expressions, eye-gaze shifts, and lip-speech movements. Combining univariate and multivariate analyses, we show a systematic topological organization within the STS, with gaze-related activity predominating in the most posterior and superior sector, speech-related activity in the anterior sector, and emotional expressions represented in the intermediate middle STS. The right STS appeared to hold a finer functional segregation between all four types of facial movements, with the best discriminative abilities within the face-selective posterior STS (pSTS). Conversely, the left STS showed greater overlap between conditions, with a lack of distinction between mouth movements associated with speech or happy expressions, and better discriminative abilities (for gaze and speech vs. emotion conditions) outside the pSTS. Differential sensitivity to upper (eye) or lower (mouth) facial features may contribute to, but does not appear to fully account for, these response patterns.
