Abstract

To program a goal-directed response in the presence of multiple sounds, the audiomotor system should separate the sound sources. The authors examined whether the brain can segregate synchronous broadband sounds in the midsagittal plane, using amplitude modulations as an acoustic discrimination cue. To succeed in this task, the brain has to use pinna-induced spectral-shape cues and temporal envelope information. The authors tested spatial segregation performance in the midsagittal plane in two paradigms in which human listeners were required to localize, or distinguish, a target amplitude-modulated broadband sound when a non-modulated broadband distractor was played simultaneously at another location. The level difference between the amplitude-modulated and distractor stimuli was systematically varied, as was the modulation frequency of the target sound. The authors found that participants were unable to segregate or localize the synchronous sounds. Instead, they invariably responded toward a level-weighted average of both sound locations, irrespective of the modulation frequency. An increased variance in the response distributions for double sounds of equal level was also observed, which cannot be accounted for by a segregation model, nor by a probabilistic averaging model.

Highlights

  • Segregating sounds, and grouping them into perceptually distinct auditory objects, requires the brain to process distinct acoustic properties of a sound in parallel

  • It is extremely unlikely that multiple sources contain the exact same frequencies with identical onsets, offsets, and co-modulations, and this statistical fact can in principle be used as a prior to group sound features into distinct auditory objects (Bell and Sejnowski, 1995; Bregman, 1990; Darwin, 2008; Lee et al., 1998; Wang and Brown, 2006)

  • Broadband synchronous sounds presented in the midsagittal plane evoke a spatial percept that is determined by relative sound levels and spatial separation, rather than by task instructions

Introduction

Segregating sounds, and grouping them into perceptually distinct auditory objects, requires the brain to process distinct acoustic properties of a sound in parallel. Spatial hearing seems to play a minor role in sound segregation (Best et al., 2004; Bregman, 1990; Bremen and Middlebrooks, 2013; Schwartz et al., 2012); in the absence of non-spatial cues (such as harmonicity, or onset-disparity cues), it seems impossible to segregate sounds as different auditory objects in space. Instead, both in the horizontal plane (the stereophonic effect: Bauer, 1961; Blauert, 1997; but see Yost and Brown, 2013) and in the midsagittal plane (Bremen et al., 2010), the perceived location of synchronous sounds is directed toward a level-weighted average (WA) of the source locations. For the latter, weighted averaging occurs even when the spectral-temporal modulations of the sound sources are unrelated.
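The level-weighted average (WA) described above can be illustrated with a minimal sketch. The function below assumes, for illustration only, that the weights are proportional to linear sound intensity (10^(dB/10)); the actual weighting in the cited studies is fitted to listeners' responses and need not take this exact form.

```python
def weighted_average_response(elev_a, elev_b, level_a_db, level_b_db):
    """Level-weighted average of two source elevations (degrees).

    Weights are taken proportional to linear intensity, 10**(dB/10);
    this weighting scheme is an illustrative assumption, not the
    fitted model from the cited work.
    """
    w_a = 10.0 ** (level_a_db / 10.0)
    w_b = 10.0 ** (level_b_db / 10.0)
    return (w_a * elev_a + w_b * elev_b) / (w_a + w_b)

# Equal levels: the predicted response lands midway between the sources.
print(weighted_average_response(-20.0, 20.0, 60.0, 60.0))  # 0.0

# A 10 dB louder upper source pulls the response toward its location.
print(weighted_average_response(-20.0, 20.0, 60.0, 70.0))
```

With equal levels the prediction is the spatial midpoint, and as the level difference grows the predicted response shifts smoothly toward the louder source, consistent with the single fused percept reported for synchronous sounds.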

