Abstract

The front–back sound localization performance of human subjects was investigated in three virtual rooms with different acoustic characteristics and in an anechoic chamber. The three rectangular rooms had the same width–length–height ratio; their size and reverberation time were systematically varied in order to disentangle their respective effects on the localization cues perceived during the listening tests. The sound absorption was distributed equally over all surfaces. The head-related transfer function (HRTF) used in the simulation of the receiver was based on measurements on an artificial head. Four stimuli with different spectra and time-domain structures were presented to the listeners: broadband noise, an orchestral legato sound, an orchestral staccato sound, and noise containing two one-third octave band components centred around 0.5 and 3.15 kHz. Significant differences in localization performance were found between sounds presented in the smallest room and the large rooms, and between the anechoic room and the two large rooms. Localization performance also differed significantly between the staccato and legato stimuli, and it was significantly worse for the noise containing two one-third octave band components than for the other stimuli. Learning effects were also observed.
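The abstract does not describe how the two-band noise stimulus was generated. As a purely illustrative sketch (not the authors' method), such a stimulus could be produced by band-pass filtering white noise into one-third octave bands centred at 0.5 kHz and 3.15 kHz and summing the two bands; the sampling rate, duration, and filter order below are assumptions.

```python
# Hypothetical sketch of a two-band noise stimulus: white noise filtered into
# one-third octave bands around 0.5 kHz and 3.15 kHz, then summed.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000          # assumed sampling rate in Hz
DURATION = 2.0      # assumed stimulus length in seconds

def third_octave_band_noise(fc, n_samples, fs=FS):
    """White noise band-pass filtered to a one-third octave band around fc."""
    lo, hi = fc / 2**(1/6), fc * 2**(1/6)   # lower and upper band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, np.random.randn(n_samples))

n = int(FS * DURATION)
stimulus = third_octave_band_noise(500.0, n) + third_octave_band_noise(3150.0, n)
stimulus /= np.max(np.abs(stimulus))        # normalise to avoid clipping
```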
