Abstract

Research indicates that auditory and visual information is integrated during the perception of speech. Conflicting auditory and visual stimuli can result in an illusory experience known as the McGurk effect (e.g., auditory /bav/ dubbed onto a face saying /gav/ results in a perception of "dav"). This study used a priming paradigm to investigate whether a phonemic representation for the auditory portion of a McGurk stimulus is active after the illusory phoneme is experienced. Subjects were given (nonword) prime-target conditions, including: (1) McGurk (e.g., prime auditory /bav/ + visual /gav/ = "dav"; target auditory /bav/); (2) Incongruent (e.g., prime auditory-visual /mav/; target auditory /bav/); (3) Identity (e.g., prime auditory-visual /yav/; target auditory /yav/). Results show that mean reaction times to repeat targets were fastest in the identity condition. Response times for the McGurk and incongruent conditions were indistinguishable from one another and significantly slower than the identity condition. This finding suggests that once the auditory and visual information is combined and a phonemic representation is formed, the actual auditory signal is no longer available to affect processing of the target. [This work is based on ideas developed by the late Kerry P. Green and supported by NSF.]
