Abstract

It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

Highlights

  • Even though speech is primarily conveyed acoustically, speech comprehension is influenced by the visible facial kinematics of the speaker

  • Familiarity with a speaker’s face can affect speech comprehension even under auditory-only listening conditions

  • The main finding of this study is that visual face-movement sensitive areas in the left posterior STS (pSTS) communicate with an auditory speech-intelligibility sensitive area in the left anterior superior temporal gyrus/sulcus (aSTG/S) during auditory-only speech recognition


Introduction

Even though speech is primarily conveyed acoustically, speech comprehension is influenced by the visible facial kinematics of the speaker. Seeing the lip and tongue movements of a speaker improves speech comprehension substantially [1,2,3]. Familiarity with a speaker’s face can affect speech comprehension even under auditory-only listening conditions; a brief familiarization with a speaker’s visual or audiovisual speaking dynamics increases subsequent recognition performance in auditory-only speech recognition tasks [5,6]. Another aspect of auditory communication, i.e., the recognition of the speaker’s identity, is also improved [5,7,8]. These behavioral benefits are associated with the activation of face-sensitive brain areas, i.e., the face-movement sensitive posterior STS (pSTS), which is associated with recognition of what is said (speech recognition), and the face-identity sensitive fusiform face area (FFA), which is associated with recognition of who is speaking (voice recognition).


