Abstract

Studies on audio-visual interactions in sound localization have primarily focused on the relations between the spatial position of sounds and their perceived visual source, as in the famous ventriloquist effect. Much less work has examined how seeing aspects of the visual environment affects sound localization. In this study, we took advantage of an innovative method for the study of spatial hearing – based on real sounds, virtual reality and real-time kinematic tracking – to examine the impact of a minimal visual spatial frame on sound localization. We tested sound localization in normal-hearing participants (N=36) in two visual conditions: a uniform gray scene and a simple visual environment comprising only a grid. In both cases, no visual cues about the sound sources were provided. During and after sound emission, participants were free to move their head and eyes without restriction. We found that the presence of a visual spatial frame improved hand-pointing in elevation and led to faster first gaze movements to sounds. Our findings show that sound localization benefits from the presence of a minimal visual spatial frame, and they confirm the importance of combining kinematic tracking and virtual reality when aiming to reveal the multisensory and motor contributions to spatial-hearing abilities.

Highlights

  • In humans, as well as in other animals that can hear, the ability to localize sounds in space has evolved within a multisensory environment

  • We first assessed participants’ ability to discriminate sound source location in the uniform gray condition – which we treated as a baseline – before turning to our key experimental question: the effect of seeing a visual grid

  • The present study examined the effect of seeing a simple visual spatial frame on sound localization while participants were free to move their head and eyes as they listened to sounds


Introduction

In humans, as well as in other animals that can hear, the ability to localize sounds in space has evolved within a multisensory environment. Vision can provide direct information about the auditory target, by revealing the position of the sound source in the environment (e.g., the listener hears and sees the bird tweeting in the tree). Vision can also provide indirect information about auditory targets, by revealing from which sector of space they may originate or by providing general information about the environmental spatial frame for encoding sound position (e.g., the listener cannot see the bird tweeting, but perceives the tree branches from which the stimulus originates).
