Abstract

Previous studies on multimodal integration in speech perception have found that not only auditory and visual cues, but also tactile sensation—such as an air-puff on the skin that simulates aspiration—can be integrated in the perception of speech sounds (Gick & Derrick, 2009). However, most previous investigations have been conducted with English listeners, and it remains uncertain whether such multisensory integration is shaped by linguistic experience. The current study investigates audio-aerotactile integration in phoneme perception for three groups: English monolingual, French monolingual, and English-French bilingual listeners. Six-step VOT continua of labial (/ba/—/pa/) and alveolar (/da/—/ta/) stops, constructed from both English and French endpoint models, were presented to listeners who performed a forced-choice identification task. Air-puffs synchronized to syllable onset and applied to the hand on random trials increased the number of ‘voiceless’ responses for the /da/—/ta/ continuum by both English and French listeners, which suggests that audio-aerotactile integration can occur even when listeners lack an aspiration/non-aspiration contrast in their native language. Furthermore, bilingual speakers showed larger air-puff effects for English stimuli than English monolinguals did, which suggests a complex relationship between linguistic experience and multisensory integration in perception.
