Abstract

Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input: the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability, but the underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus, and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without any convergent multimodal area. The proposed study may advance understanding of the neural architecture and mechanisms underlying visual-auditory integration in the spatial realm.
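The mechanism described above, residual auditory activity at the visual position amplified by reciprocal cross-modal feedback, can be illustrated with a minimal sketch. The network below is a simplified toy model, not the authors' implementation: all parameter values (layer size, tuning widths, cross-modal gain, sigmoid shape) are hypothetical, and the lateral intra-layer synapses are omitted for brevity, keeping only the inter-layer connections in spatial register.

```python
import numpy as np

N = 180  # one neuron per degree of azimuth (hypothetical discretization)

def gaussian_input(center, sigma, amp=1.0):
    """External input with Gaussian spatial tuning around `center`."""
    x = np.arange(N)
    return amp * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

# The visual stimulus is more sharply localized (smaller sigma) than the
# auditory one; the two are presented at spatially discrepant positions.
visual_in = gaussian_input(center=100, sigma=2.0)
auditory_in = gaussian_input(center=80, sigma=10.0)

# Inter-layer connections in spatial register: each auditory neuron is
# excited by the visual neuron coding the same position, and vice versa.
W_cross = 0.4 * np.eye(N)

def sigmoid(u):
    # Static neuron nonlinearity (threshold and slope are hypothetical).
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.5)))

# Integrate simple first-order neuron dynamics to steady state.
va = np.zeros(N)  # auditory layer activities
vv = np.zeros(N)  # visual layer activities
dt, tau = 0.1, 1.0
for _ in range(500):
    ua = auditory_in + W_cross @ vv  # auditory net input
    uv = visual_in + W_cross @ va    # visual net input
    va += dt / tau * (-va + sigmoid(ua))
    vv += dt / tau * (-vv + sigmoid(uv))

# Decode the perceived sound position as the centroid of auditory activity:
# the broad auditory bump at 80 deg plus the feedback-amplified residual
# activity under the visual stimulus pull the estimate toward 100 deg.
perceived = np.sum(np.arange(N) * va) / np.sum(va)
print(perceived)
```

The shift arises only because the auditory tuning is broad enough to leave residual input at the visual position, which the cross-modal feedback then amplifies above threshold; with a sharply tuned auditory input the estimate would stay near 80 degrees, consistent with result (i) above.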

Highlights

  • The different senses are not treated as separate modules in our brain; rather, they interact with one another

  • The model was used in its basal configuration to simulate conditions leading to the ventriloquism effect

  • In this work we propose that a simple neural network, consisting of two spatially organized unimodal layers with different receptive fields and connections in spatial register, can explain the ventriloquism effect


Introduction

The different senses are not treated as separate modules in our brain; rather, they interact with one another. A useful approach to investigate cross-modal interactions is to create conflict situations in which discordant information is provided by two different sensory modalities. Several studies showed that visual bias of auditory location occurs both with complex and meaningful stimuli and with neutral and simple stimuli, such as spots of light and tone or noise bursts [2,6,7,8]. These studies suggest that the shift of auditory location cannot be ascribed only to cognitive factors or voluntary strategies but is due, at least in part, to a phenomenon of automatic attraction of the sound by the simultaneous and spatially separate visual input.

