Abstract

Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex, and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, it remains unclear how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment was to investigate how visual speech influences activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.
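
To make the design concrete, here is a minimal, hypothetical single-subject sketch in Python using nilearn. The image file name, TR, event timings, and condition labels ("A" for auditory-only, "AV" for audiovisual) are illustrative assumptions; the paper's actual analysis pipeline is not specified in this excerpt.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Event onsets for the two conditions: auditory speech alone ("A") and
# auditory speech with congruent visual speech ("AV"). The auditory
# stimulus is identical across conditions; only the visual input differs.
events = pd.DataFrame({
    "onset":      [0.0, 20.0, 40.0, 60.0],      # seconds (illustrative)
    "duration":   [10.0, 10.0, 10.0, 10.0],
    "trial_type": ["A", "AV", "A", "AV"],
})

# Hypothetical preprocessed BOLD run; t_r is an assumed repetition time.
model = FirstLevelModel(t_r=2.0, noise_model="ar1", hrf_model="spm")
model = model.fit("sub-01_task-avspeech_bold.nii.gz", events=events)

# Per-subject contrast: where does adding congruent visual speech increase
# the response beyond auditory speech alone?
av_gt_a_map = model.compute_contrast("AV - A", output_type="effect_size")
```

Because the auditory stimulus is held constant, any positive AV - A effect in auditory cortex can be attributed to the added visual input rather than to a change in the auditory signal.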

Highlights

  • In daily conversations, speech is heard but it is also seen: auditory speech is typically accompanied by congruent visual speech

  • Studies of audiovisual speech have consistently identified the posterior superior temporal sulcus (pSTS) as a site that appears to support audiovisual integration, in that it typically shows greater activity for audiovisual speech compared to auditory or visual speech alone [10,11,12,13,14]

  • Whole-brain group analysis: a contrast of the Audiovisual condition with the Auditory-Speech-Only condition (AV > A) yielded activation in left posterior superior temporal sulcus, left middle temporal gyrus, and right superior temporal gyrus (q …); a hypothetical sketch of this analysis appears below the list

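The group-level step in the last highlight could look like the following sketch, again in Python with nilearn and entirely hypothetical: a one-sample test over per-subject AV > A contrast maps, thresholded with FDR. The subject count and file names are placeholders, and alpha stands in for the q threshold that is truncated in the highlight above.

```python
import pandas as pd
from nilearn.glm import threshold_stats_img
from nilearn.glm.second_level import SecondLevelModel

# One AV > A contrast image per subject (hypothetical file names and N).
contrast_maps = [f"sub-{i:02d}_av_gt_a.nii.gz" for i in range(1, 21)]
design = pd.DataFrame({"intercept": [1] * len(contrast_maps)})

# One-sample group model: is the AV > A effect reliably above zero
# across subjects at each voxel?
group_model = SecondLevelModel().fit(contrast_maps, design_matrix=design)
z_map = group_model.compute_contrast(
    second_level_contrast="intercept", output_type="z_score"
)

# FDR-correct the map; alpha is a placeholder for the paper's threshold.
thresholded_map, threshold = threshold_stats_img(
    z_map, alpha=0.05, height_control="fdr"
)
```

Testing the intercept of a one-sample design asks whether the AV > A effect is, on average, greater than zero across subjects, which is the standard random-effects group analysis.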


Introduction

Speech is heard but it is also seen: auditory speech is typically accompanied by congruent visual speech. Individuals with early-onset hearing loss often rely on visual cues for accurate perception [3], and cochlear implant users demonstrate a greater reliance on visual speech cues than those with normal hearing [4]. This suggests that auditory and visual interactions are an important aspect of speech perception. Studies of audiovisual speech have consistently identified the posterior superior temporal sulcus (pSTS) as a site that appears to support audiovisual integration, in that it typically shows greater activity for audiovisual speech compared to auditory or visual speech alone [10,11,12,13,14]. Activation in the STS is correlated with behavioral performance on an audiovisual speech integration task [15], and stimulation of the STS interferes with audiovisual speech integration [16], demonstrating the region's causal role in the process.
