Abstract

While much is known about how well listeners can locate single sound sources under ideal conditions, it remains unclear how this ability relates to the more complex task of spatially analyzing realistic acoustic environments. There are many challenges in measuring spatial perception in realistic environments, including generating simulations that offer a level of experimental control, dealing with the presence of energetic and informational masking, and designing meaningful behavioral tasks. In this work we explored a new method to measure spatial perception in one realistic environment. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. Within this room, 96 different “scenes” were generated, comprising 1-6 concurrent talkers seated at different tables. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a touchscreen interface. Young listeners with normal hearing were able to reliably analyze scenes with up to four simultaneous talkers, while older listeners with hearing loss demonstrated errors even with two talkers at a time. Localization accuracy for detected talkers, as measured by this approach, was sensitive both to the complexity of the scene and to the listener’s degree of hearing loss.
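To make the scoring approach concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of how localization accuracy for detected talkers might be quantified: each reported talker position is paired with a true talker position by a minimum-total-error assignment, and the per-pair angular errors plus the counting discrepancy are returned. Function names, the use of azimuth-only positions, and the assignment method are all illustrative assumptions.

```python
# Hypothetical scoring sketch (assumption, not the authors' method):
# match reported talkers to true talkers and summarize angular error.
import numpy as np
from scipy.optimize import linear_sum_assignment


def angular_error(a, b):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)


def score_scene(true_azimuths, reported_azimuths):
    """Pair reported talkers with true talkers (minimum-total-error
    assignment) and return per-pair errors and the count discrepancy."""
    cost = np.array([[angular_error(t, r) for r in reported_azimuths]
                     for t in true_azimuths])
    rows, cols = linear_sum_assignment(cost)  # handles unequal counts
    errors = [float(cost[i, j]) for i, j in zip(rows, cols)]
    miscount = len(reported_azimuths) - len(true_azimuths)
    return errors, miscount


# Example: a three-talker scene in which the listener reports two talkers
errors, miscount = score_scene([30.0, -45.0, 120.0], [25.0, -60.0])
print(errors, miscount)  # [5.0, 15.0] -1  (one talker missed)
```

Under this kind of scoring, localization error is only computed for talkers that were detected, which matches the paper's framing of "localization accuracy for detected talkers" while keeping counting errors as a separate measure.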
