Abstract

Training with audio-visual speech improves subsequent auditory speech perception more than training with auditory-alone speech (e.g., Bernstein et al., 2013). What is the source of this bimodal training advantage? One explanation is that perceivers rely on learned bimodal associations. Alternatively, perceivers could be exploiting natural, amodal regularities available in both the auditory and visual signals. To address this question, observers were trained with multisensory stimuli for which they had no bimodal associative experience. It is known that felt articulations, acquired by placing a hand on a speaker's face, can provide information for speech perception (see Treille et al., 2014, for a review). Importantly, these effects are found in participants with no prior experience perceiving speech through touch. If training with audio-haptic speech improves auditory speech perception, then this bimodal advantage cannot be due to learned associations but likely reflects sensitivity to amodal information. To test this hypothesis, participants either heard, or heard and felt, a speaker's speech. Participants subsequently identified words from a set of novel audio-alone sentences. Preliminary data indicate that audio-haptic speech training improves subsequent auditory-only perception more than audio-only training. These results challenge a learned-association account of the bimodal training advantage.

