Abstract

Bayesian models propose that multisensory integration depends on both the sensory evidence (the likelihood) and priors indicating whether two inputs belong to the same event. The present study manipulated the prior for dynamic auditory and visual stimuli to co-occur and tested the predicted enhancement of multisensory binding as assessed with a simultaneity judgment task. In an initial learning phase, participants were exposed to a subset of auditory-visual combinations. In the test phase, the previously encountered audio-visual stimuli were presented together with new combinations of the auditory and visual stimuli from the learning phase, audio-visual stimuli containing one learned and one new sensory component, and audio-visual stimuli containing completely new auditory and visual material. Auditory-visual asynchrony was manipulated. A higher proportion of simultaneity judgments was observed for the learned crossmodal combinations than for new combinations of the same auditory and visual elements and than for all other conditions. This result suggests that prior exposure to certain auditory-visual combinations changed the expectation (i.e., the prior) that their elements belonged to the same event. As a result, multisensory binding became more likely despite unchanged sensory evidence from the auditory and visual elements.
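
To make the likelihood/prior distinction concrete, the minimal sketch below (not taken from the study; the function name p_common and all parameter values are illustrative assumptions) uses a standard Bayesian causal-inference formulation: the posterior probability that an auditory and a visual signal share a common cause is computed from a noisy measurement of their asynchrony. The sensory evidence is held constant; only the prior for a common cause changes.

```python
# Minimal sketch, not the authors' model: Bayesian causal inference for
# audio-visual binding. The internal measurement of the asynchrony is assumed
# to be Gaussian; under a common cause the true asynchrony is 0 ms, under
# independent causes measurements are spread much more broadly. All numbers
# below are illustrative assumptions.
from scipy.stats import norm

def p_common(measured_soa_ms, prior_common, sensory_sd=60.0, independent_sd=300.0):
    """Posterior probability that the auditory and visual signals share a common cause."""
    like_common = norm.pdf(measured_soa_ms, loc=0.0, scale=sensory_sd)           # likelihood under C = 1
    like_independent = norm.pdf(measured_soa_ms, loc=0.0, scale=independent_sd)  # likelihood under C = 2
    return (like_common * prior_common) / (
        like_common * prior_common + like_independent * (1.0 - prior_common)
    )

# Identical sensory evidence (a measured asynchrony of 100 ms), different priors,
# e.g. a newly combined vs. a previously learned audio-visual pair:
for prior in (0.3, 0.7):
    print(f"prior = {prior}: p(common cause) = {p_common(100.0, prior):.2f}")
```

With the illustrative numbers above, the same 100 ms measurement yields a posterior of roughly 0.36 under the weaker prior and roughly 0.76 under the stronger one, which is the sense in which binding can change while the sensory evidence does not.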

Highlights

  • Most of our percepts of the world are multisensory

  • In addition to supramodal features, previous experience with specific crossmodal combinations might influence multisensory binding: if certain crossmodal combinations repeatedly coincide in the environment, we learn through exposure that specific sensory events belong together

  • The stimulus onset asynchrony (SOA) between an auditory and a visual stimulus needed to be larger for semantically congruent audio-visual speech pairs than for incongruent audio-visual speech pairs before participants noticed a difference in temporal onset [9, 10]


Introduction

Most of our percepts of the world are multisensory. For example, preparing a meal provides us with tactile, visual, olfactory and auditory information (e.g., when washing and cutting vegetables). By contrast, no differences were found in temporal order judgments between semantically congruent and incongruent non-speech crossmodal stimuli. These results suggest that learning-induced multisensory binding in language might be highly specific. In order to fully understand the influence of prior knowledge on multisensory binding, it is necessary to experimentally manipulate crossmodal statistics, rather than to use overlearned stimuli such as speech or object stimuli.

In the present study, participants were exposed to artificial audio-visual combinations (videos) prior to a simultaneity judgment task. During this learning phase, participants were instructed to pay attention to the crossmodal combination of the auditory-visual stimuli (see Supplementary Table S1). We expected a higher likelihood of perceived simultaneity for the learned crossmodal stimuli in comparison to any other condition, due to a stronger prior [6, 15] or assumption of unity [4] that their elements belong to the same object.
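
A hedged sketch of this prediction follows (again an assumption-laden illustration, not the authors' analysis: the noise levels, the prior values, and the posterior-threshold decision rule are all assumed). Simulating noisy measurements at each physical SOA and reporting "simultaneous" whenever the posterior probability of a common cause exceeds 0.5 yields a wider window of perceived simultaneity when the common-cause prior is larger, i.e., more "simultaneous" responses for learned than for new combinations at intermediate asynchronies.

```python
# Hedged illustration, not the authors' analysis: predicted proportion of
# "simultaneous" responses in a simultaneity judgment task under two different
# common-cause priors. On each simulated trial the internal measurement equals
# the physical SOA plus Gaussian sensory noise; "simultaneous" is reported when
# the posterior probability of a common cause exceeds 0.5. All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def prop_simultaneous(soa_ms, prior_common, sensory_sd=60.0,
                      independent_sd=300.0, n_trials=20_000):
    measured = soa_ms + rng.normal(0.0, sensory_sd, size=n_trials)
    # Gaussian likelihoods; the shared 1/sqrt(2*pi) factor cancels in the posterior
    like_common = np.exp(-0.5 * (measured / sensory_sd) ** 2) / sensory_sd
    like_independent = np.exp(-0.5 * (measured / independent_sd) ** 2) / independent_sd
    posterior = like_common * prior_common / (
        like_common * prior_common + like_independent * (1.0 - prior_common)
    )
    return (posterior > 0.5).mean()

for soa in (0, 100, 200, 300):  # physical audio-visual asynchronies in ms
    learned = prop_simultaneous(soa, prior_common=0.7)  # learned combination (stronger prior)
    new = prop_simultaneous(soa, prior_common=0.3)      # new combination (weaker prior)
    print(f"SOA {soa:>3} ms: learned {learned:.2f}, new {new:.2f}")
```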

