Abstract

The nature of interactions between the senses is a topic of intense interest in neuroscience, but an unresolved question is how sensory information from hearing and vision is combined when the two senses interact. A problem for testing auditory-visual interactions is devising stimuli and tasks that are equivalent in both modalities. Here we report a novel paradigm in which we first equated the discriminability of the stimuli in each modality, then tested how a distractor in the other modality affected performance. Participants discriminated pairs of amplitude-modulated tones or size-modulated visual objects (cuboids), alone or when a similarly modulated distractor stimulus of the other modality accompanied one of the pair. Discrimination of sound modulation depth was affected by a modulated cuboid only when their modulation rates were the same. In contrast, discrimination of cuboid modulation depth was little affected by an equivalently modulated sound. Our results suggest that what observers perceive when auditory and visual signals interact is not simply determined by the discriminability of the individual sensory inputs, but also by factors that increase the perceptual binding of these inputs, such as temporal synchrony.
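The two stimulus quantities in this paradigm are modulation depth (the size of the task-relevant fluctuation that participants discriminated) and modulation rate (how fast the stimulus fluctuates, which was either matched or mismatched across modalities). The paper's actual carrier frequency, duration, and sampling rate are not given in this excerpt, so the values in the sketch below are illustrative assumptions; it only shows how depth and rate enter a sinusoidally amplitude-modulated tone.

```python
import numpy as np

def am_tone(duration_s=1.0, fs=44100, carrier_hz=500.0,
            mod_rate_hz=4.0, mod_depth=0.5):
    """Sinusoidally amplitude-modulated tone (illustrative parameters).

    mod_depth (0..1) is the modulation depth the listener must discriminate;
    mod_rate_hz is the modulation rate that can be matched (or mismatched)
    between the auditory and visual stimuli.
    """
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return envelope * carrier / (1.0 + mod_depth)  # normalise to avoid clipping

# The size-modulated cuboid can be described analogously: its size follows
# size(t) = s0 * (1 + mod_depth * sin(2 * pi * mod_rate_hz * t)).
```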

Highlights

  • Our senses have evolved to detect different stimulus energies, but they seldom work in isolation

  • Attempts to define how auditory-visual signals are weighted when they interact have been further complicated by the wide variety of stimulus types employed in auditory-visual studies, from formless flashes or sound bursts[4,8,14] to higher-level stimuli composed of real-world objects and sounds[2,3,15]

  • In Experiment 1, we tested whether performance on an auditory discrimination task is affected by visual information

Introduction

Our senses have evolved to detect different stimulus energies, but they seldom work in isolation. Some accounts suggest that dominance is task specific, depending on which modality has the greater sensitivity for the task: vision dominates judgements of spatial location, while hearing dominates temporal tasks requiring the detection of rapidly changing stimuli[7,8,9,10]. An extension of the latter view is the Bayesian approach, which argues that a modality's influence on the multisensory interaction is weighted according to the reliability (inversely proportional to the variability) of the information that modality contributes to the bimodal estimate of a stimulus property[11,12,13]. Semantically meaningful, higher-level stimuli like everyday objects and speech are problematic because they may recruit post-perceptual cognitive processes and memory. Overall, it remains unclear whether previous results reflect auditory-visual interactions that give rise to a unified percept or post-perceptual (e.g., decisional) processes[16]. The dominance of vision over hearing in this study, even when the stimuli were matched for discriminability, suggests that multimodal interactions depend on how readily the two signals are perceptually bound.
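The reliability-weighted account cited above[11,12,13] is usually formalised as maximum-likelihood cue combination: each modality's estimate is weighted by the inverse of its variance, and the combined estimate is more reliable than either cue alone. The sketch below implements that standard model for illustration; it is not an analysis from the present paper, and the function name and example numbers are hypothetical.

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of an auditory
    and a visual estimate of the same stimulus property.

    Each cue is weighted by its reliability (inverse variance); the combined
    estimate has lower variance than either unimodal estimate.
    """
    rel_a, rel_v = 1.0 / var_a, 1.0 / var_v
    w_a = rel_a / (rel_a + rel_v)
    w_v = rel_v / (rel_a + rel_v)
    combined = w_a * est_a + w_v * est_v
    combined_var = 1.0 / (rel_a + rel_v)
    return combined, combined_var

# Example: a noisier auditory estimate contributes less to the bimodal percept.
print(mle_combine(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0))
# -> (11.6, 0.8)
```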
