Abstract

This study investigated the extent to which audiovisual speech integration is special by comparing behavioral and neural measures using both speech and non-speech stimuli. An audiovisual recognition experiment was conducted in which listeners were presented with auditory, visual, and audiovisual stimuli. The auditory component consisted of sine-wave speech, and the visual component consisted of point-light displays, in which dots mark a talker's points of articulation. In the first phase, listeners engaged in a discrimination task while unaware of the linguistic nature of the auditory and visual stimuli. In the second phase, they were informed that the auditory and visual stimuli were spoken utterances of /be/ (bay) and /de/ (day), and they engaged in the same task. The neural dynamics of audiovisual integration were investigated with EEG, including mean Global Field Power and current density reconstruction (CDR). As predicted, support for divergent regions of multisensory integration between the speech and non-speech stimuli was obtained, namely greater posterior parietal activation in the non-speech condition. Conversely, reaction-time measures indicated qualitatively similar multisensory integration across experimental conditions.
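For readers unfamiliar with the Global Field Power measure named above: at each time point, GFP is the standard deviation of voltage across all electrodes (Lehmann & Skrandies, 1980), and mean GFP averages that value over an epoch. The sketch below is a minimal illustration of that definition, not the study's analysis pipeline; the array shape and the synthetic data are assumptions for demonstration only.

    import numpy as np

    def global_field_power(eeg):
        """GFP at each time point: the spatial standard deviation
        of voltage across electrodes (channels x samples input)."""
        return eeg.std(axis=0, ddof=0)

    # Hypothetical epoch: 64 electrodes, 500 samples (not from the study)
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((64, 500))

    gfp = global_field_power(eeg)  # one GFP value per time point
    mean_gfp = gfp.mean()          # mean GFP over the epoch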
