Abstract
Auditory localization performance in audiovisual environments is affected by both the complexity and the congruence of the auditory and visual sensory inputs. Subjects were asked to identify the perceived location of a virtual acoustic source placed at varying positions in the horizontal plane and rendered either by conventional stereophonic reproduction or through wavefront reconstruction by a loudspeaker array. The virtual source was auralized in simulated reflective and anechoic acoustic environments, both with no visual stimulus and with projected imagery of varying congruence. Localization precision with complex aural stimuli, in the form of simulated reflective environments, is reduced relative to anechoic conditions. Additionally, array-based wavefront reconstruction provides a significant increase in overall localization performance over stereophonic rendering, particularly for subjects positioned off the primary axis of the loudspeaker array. Subjects presented with simultaneous aural and visual inputs are able to accurately locate the virtual sound source when a large incongruence angle exists (typically more than 30 degrees); at smaller angular separations between the stimuli, however, a bias towards the visual stimulus is detectable. The results help determine the accuracy requirements for auralization and reproduction when creating virtual multimedia environments.