Abstract

Although most sound localization research has examined the ability of listeners to determine the location of a single sound presented in a quiet (typically anechoic) environment, most real‐world listening situations are more complex, with multiple simultaneous sounds. Here, an initial experiment, designed to examine localization in multisource environments, is described. Listeners judged the location of a target signal (speech or environmental sound, presented normally or time‐reversed) masked by up to four simultaneous sounds. In each block of trials, the observation interval was either preceded by, or followed by, a cueing interval, during which the stimulus to be localized was identified. It was expected that these two approaches would lead to different answers, as the associated tasks presumably address different listening strategies (i.e., analytic listening versus monitoring). The results indicate that, in all conditions, localization errors increase as the number of simultaneous sources increases. Moreover, performance degrades more rapidly in the post‐cue condition than in the pre‐cue condition. Surprisingly, this difference occurs for as few as two simultaneous sources, suggesting that there is a substantial cost when listeners are asked to remember what sounds were present and where those sounds were located in a complex auditory environment. [Work supported by AFOSR.]
