Abstract

The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three acoustic stimuli were generated in Virtual Auditory Space: two acoustically “dry” stimuli rendered from anechoic head-related impulse responses measured at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded in situ at 5° intervals, capturing reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects judged which modality was leading (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of the psychometric function (β) across the three acoustic conditions. Furthermore, neither the PSE nor β differed significantly across velocities, suggesting a fixed spatial window of audio-visual separation. These findings suggest that no loss of spatial information accompanied the reductions in spatial cue density and the reverberation levels tested, and they establish a perceptual measure for assessing the veracity of motion generated from discrete source locations and in echoic environments.
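The PSE and slope (β) reported above are the parameters of psychometric functions fitted to the 2AFC responses. As a minimal illustration of that analysis step, the sketch below fits a logistic function to hypothetical "visual leading" response proportions; the data values, the exact parameterization, and the use of SciPy are assumptions for illustration, not the authors' analysis pipeline.

```python
# Minimal sketch (hypothetical data, not the study's code): estimating the
# PSE and slope (beta) by fitting a logistic psychometric function to the
# proportion of "visual leading" responses at each audio-visual offset.
import numpy as np
from scipy.optimize import curve_fit

def logistic(offset_deg, pse, beta):
    """P(respond 'visual leading') as a function of spatial offset (deg)."""
    return 1.0 / (1.0 + np.exp(-beta * (offset_deg - pse)))

# Hypothetical offsets (deg) and response proportions for one condition.
offsets = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)
p_visual = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.95])

(pse, beta), _ = curve_fit(logistic, offsets, p_visual, p0=[0.0, 0.3])
print(f"PSE = {pse:.2f} deg, slope beta = {beta:.3f} /deg")
```

Under this parameterization the PSE is the offset yielding 50% "visual leading" responses, and β sets the steepness of the function, so a flatter slope indicates a wider window of audio-visual separation.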

Highlights

  • Various experiments have sought to determine the nature of the spatiotemporal integration window for audio-visual motion [1,2,3]

  • Points of subjective equivalence (PSEs) for Experiment 2 (reverberant; BRIR 5°) are shown in Figure 4A, plotted alongside PSEs corresponding to the anechoic conditions of Experiment 1

  • The current study explored the effects of spatial quantization and reverberation on auditory motion perception

Introduction

Various experiments have sought to determine the nature of the spatiotemporal integration window for audio-visual motion [1,2,3]. To probe this question, studies typically deliver moving auditory stimuli using an array of sequentially activated speakers in the free field [4,5,6,7], or over headphones by measuring Head-Related Impulse Responses (HRIRs) and rendering a Virtual Auditory Space (VAS) [8]. The percept of motion is usually created by sequentially activating discrete stationary sound sources. Whether these are physical speakers in free-field arrays or stimuli rendered in VAS via measured HRIRs (see Methods), the changes in acoustic cues are quantized, resulting in a loss of spatial information. As suggested in Grantham [13] and confirmed in Carlile and Best [14] and Freeman et al.
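To make the quantization concrete, the sketch below renders moving sound in VAS by convolving successive segments of a carrier signal with HRIRs at discrete azimuths, so the binaural cues jump in fixed steps rather than varying continuously. The `load_hrir` lookup, the parameter values, and the absence of cross-fading between positions are hypothetical simplifications, not the study's actual stimulus pipeline.

```python
# Minimal sketch (hypothetical, not the study's stimulus code): quantized
# auditory motion in VAS via segment-wise convolution with discrete HRIRs.
import numpy as np
from scipy.signal import fftconvolve

FS = 44100        # sample rate (Hz); assumed value
VELOCITY = 50.0   # source velocity (deg/s), one of the tested speeds
STEP_DEG = 5.0    # spatial quantization: HRIRs available every 5 deg

def load_hrir(azimuth_deg):
    """Hypothetical lookup of a measured HRIR pair (left, right) for one
    azimuth. A unit impulse stands in for real measured data here."""
    h = np.zeros(128)
    h[0] = 1.0
    return h, h

def render_motion(signal, start_deg, end_deg):
    """Quantize the trajectory into STEP_DEG hops and convolve each signal
    segment with the HRIR of its nominal position; binaural cues therefore
    change in discrete jumps, as described in the text."""
    seg_len = int((STEP_DEG / VELOCITY) * FS)      # samples per position
    azimuths = np.arange(start_deg, end_deg, STEP_DEG)
    left, right = [], []
    for i, az in enumerate(azimuths):
        seg = signal[i * seg_len:(i + 1) * seg_len]
        hl, hr = load_hrir(az)
        left.append(fftconvolve(seg, hl, mode="same"))
        right.append(fftconvolve(seg, hr, mode="same"))
    return np.concatenate(left), np.concatenate(right)

# 90 deg trajectory at 50 deg/s -> 1.8 s of broadband noise carrier.
carrier = np.random.randn(int((90.0 / VELOCITY) * FS))
left_ch, right_ch = render_motion(carrier, start_deg=-45.0, end_deg=45.0)
```

Halving STEP_DEG in this sketch doubles the number of discrete positions along the same trajectory, which is the manipulation behind the 1° versus 5° comparison in Experiment 1.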
