Abstract

It is well-established that the perception of speech can be highly influenced by visible articulatory information. Recently, Irwin et al. (2017) demonstrated a robust effect in which visual speech cues perceptually “restore” a speech sound that has been acoustically weakened. Here we investigated the nature of the visual information that elicits this perceptual illusion. To accomplish this, we utilized an oddball paradigm in which perceivers were presented with acoustic /ba/ (the more frequently occurring standard stimulus) and /a/ tokens (the infrequently presented deviant stimulus). The acoustic tokens were dubbed with three types of video tokens: (1) a full face articulating /ba/; (2) the same face articulating /ba/ but with the oral-facial region pixelated; or (3) a point-light facial display of the produced /ba/ that depicted the isolated kinematics of the visible lip movements. Results indicated that perceivers showed visual phonemic restoration (reduced accuracy in detecting deviant /a/) in the presence of the natural talking face, but not in the presence of either the pixelated or schematic (point-light) faces. These findings suggest that the extracted kinematic information may not be sufficient to elicit the restoration effect, or that isolated kinematic cues do not integrate with acoustic speech in a robust manner.
