Abstract

Interaural time, level, and spectral differences are the major cues used for sound source localization in the horizontal plane. Past studies have shown that human subjects can be trained to use monaural spectral cues to localize sound sources in the azimuthal plane, but performance is poor. The current study investigates whether combining two monaural signals, one at each ear and presented one after another in time, could improve sound source localization accuracy. The purpose is to examine whether human subjects can compare a monaural signal at one ear with the short-term memory of a prior monaural signal that arrived at the other ear. Subjects were asked to judge the position of a loudspeaker that presented a 250-ms, 40 dBA noise burst with a roving spectral contour in a quarter field. The rms sound source localization error was measured in three conditions: (1) a single monaural signal; (2) two consecutive monaural “looks” at identical signals, separated by a 3-s interval; and (3) normal binaural hearing. Localization performance in the “two looks” condition was better than with a single monaural presentation, but still inferior to that with binaural presentation. Similar experiments were carried out over headphones, and lateralization performance was compared with the sound-field localization data. Contributions of level and spectral differences will be discussed. [Research supported by the AFOSR.]
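
The rms localization error reported above is the root-mean-square of the angular difference between each response and the true source azimuth across trials. The following is a minimal, hypothetical sketch of how such a score could be computed per condition; the trial values, condition labels, and function name are illustrative only and are not taken from the study.

```python
import numpy as np

def rms_localization_error(target_az_deg, response_az_deg):
    """RMS error (degrees) between target and response azimuths across trials."""
    target = np.asarray(target_az_deg, dtype=float)
    response = np.asarray(response_az_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

# Hypothetical trials: quarter-field target azimuths and listener responses.
targets = [0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
responses_by_condition = {
    "monaural (one look)":  [10.0, 40.0, 5.0, 70.0, 30.0, 90.0, 60.0],
    "monaural (two looks)": [5.0, 25.0, 20.0, 55.0, 50.0, 80.0, 80.0],
    "binaural":             [2.0, 17.0, 28.0, 47.0, 58.0, 77.0, 88.0],
}

for condition, responses in responses_by_condition.items():
    print(f"{condition}: {rms_localization_error(targets, responses):.1f} deg rms error")
```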
