Abstract

A growing literature provides evidence for the importance of the synchronicity of cross-modal information in speech perception [e.g., audio-visual: Munhall et al. 1996, Perception & Psychophysics 58: 351-362; audio-aerotactile: Gick et al. 2010, JASA 128: 342-346; visual-aerotactile: Bicevskis et al., submitted ms]. While considerable work has investigated the role of temporal congruence, no research has directly explored the role of spatial congruence (i.e., co-directionality) of stimulus sources. If perceivers are picking up a localized distal speech event [e.g., Fowler 1986, Status Report on Speech Research: 139-169], cross-modal sources of information are predicted to be more likely to integrate when presented co-directionally than contra-directionally. An audio-aerotactile pairing lends itself well to this question, as both modalities can easily be presented laterally. The current study draws on methodology from previous work [Gick & Derrick 2009, Nature 462: 502-504] to ask whether cross-modal integration persists when cross-modal cues are spatially incongruent. Native English perceivers were presented with syllables contrasting in aspiration and embedded in noise, with some tokens accompanied by inaudible air puffs applied to the neck; aerotactile source locations either matched or opposed the spatial direction of the acoustic signal. Implications of the results for theories of multimodal integration will be discussed. [Funded by NIH Grant DC-02717 and NSERC.]