Abstract

In natural communication, speech perception is profoundly influenced by observable mouth movements. The additional visual information can greatly facilitate intelligibility, but incongruent visual information may also lead to novel percepts that match neither the auditory nor the visual input, as evidenced by the McGurk effect. Recent models of audiovisual (AV) speech perception accentuate the role of speech motor areas and of integrative brain sites in the vicinity of the superior temporal sulcus (STS). In this event-related 7 Tesla fMRI study we used three naturally spoken syllable pairs with matching AV information and one syllable pair designed to elicit the McGurk illusion. The data analysis focused on brain sites involved in processing and fusing AV speech and engaged in the analysis of auditory and visual differences within AV speech. Successful fusion of AV speech was related to activity within the STS of both hemispheres. Our data support and extend the audio-visual-motor model of speech perception by dissociating areas involved in perceptual fusion from areas more generally related to the processing of AV incongruence.

Highlights

  • Audiovisual (AV) integration in the perception of speech is the rule rather than the exception

  • Behavioral performance indicated that participants attentively processed the stimuli and experienced the McGurk illusion in a high proportion of trials

  • The present study took advantage of very high-field functional MRI to define the brain areas involved in the McGurk illusion during the AV integration of speech


INTRODUCTION

Audiovisual (AV) integration in the perception of speech is the rule rather than the exception. A striking effect, first described by McGurk and MacDonald (1976), is regularly found when mismatching auditory (e.g., /ba/) and visual (e.g., /ga/) syllables are presented: in this case many healthy persons perceive a syllable neither heard nor seen (i.e., /da/). This suggests that AV integration during speech perception occurs automatically and is more than just the visual modality giving the auditory modality a hand in the speech recognition process. To further investigate the roles of the inferior frontal gyrus (IFG) and the superior temporal sulcus (STS) in AV speech integration, we used naturally spoken syllable pairs with matching AV information (i.e., visual and auditory information /ba//ba/, BABA; /ga//ga/, GAGA; or /da//da/, DADA) and one audiovisually incongruent syllable pair designed to elicit the McGurk illusion (auditory /ba//ba/ dubbed onto visual /ga//ga/, M-DADA). Visual stimuli were projected via a mirror system by an LCD projector onto a diffusing screen inside the magnet bore.

METHODS
RESULTS
DISCUSSION
