Abstract

In naturalistic auditory scenes, relevant information is rarely concentrated at a single location, but rather unpredictably scattered both inside and outside the field of view (in-/out-FOV). Although parsing a complex auditory scene is a relatively easy task for a healthy human auditory system, this uncertainty represents a major issue in the development of effective hearing aid (HA) processing strategies. Whereas traditional omnidirectional microphones (OM) amplify the complete auditory scene without enhancing the signal-to-noise ratio (SNR) between in- and out-FOV streams, directional microphones (DM) may greatly increase SNR at the cost of preventing HA users from perceiving out-FOV information. The present study compares the conventional OM and DM HA settings to a split processing (SP) scheme that differentiates between in- and out-FOV processing. We recorded electroencephalographic data from ten young, normal-hearing listeners who solved a cocktail-party-scenario paradigm with continuous auditory streams, and analyzed neural tracking of speech with a stimulus reconstruction (SR) approach. While all settings exhibited significantly higher SR accuracies for attended in-FOV than for unattended out-FOV streams, there were distinct differences between settings. In-FOV SR performance was dominated by DM and SP, and out-FOV SR accuracies were significantly higher for SP than for OM and DM. Our results demonstrate the potential of an SP approach to combine the advantages of the traditional OM and DM settings without introducing significant compromises.
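The stimulus reconstruction (SR) approach mentioned above is commonly implemented as a backward (decoding) model: a regularized linear mapping from time-lagged multichannel EEG to the attended speech envelope, with SR accuracy quantified as the correlation between the reconstructed and actual envelopes. The sketch below illustrates this general idea on synthetic data; it uses ridge regression and arbitrary toy parameters (channel count, lag window, regularization strength) that are assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def lagged_design(eeg, n_lags):
    # eeg: (n_samples, n_channels). Stack time-lagged copies of each channel
    # so the decoder can integrate a short window of neural context.
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag, :]
    return X

def train_decoder(eeg, envelope, n_lags=16, ridge=1.0):
    # Backward model: ridge-regularized least squares mapping EEG -> envelope.
    X = lagged_design(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def sr_accuracy(eeg, envelope, w, n_lags=16):
    # SR accuracy: Pearson correlation between reconstructed and true envelope.
    rec = lagged_design(eeg, n_lags) @ w
    return np.corrcoef(rec, envelope)[0, 1]

# Toy data: an "envelope" that each simulated EEG channel weakly tracks.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = 0.5 * env[:, None] + rng.standard_normal((2000, 8))

w = train_decoder(eeg[:1500], env[:1500])          # train on first 75%
r = sr_accuracy(eeg[1500:], env[1500:], w)         # evaluate on held-out 25%
print(f"held-out SR accuracy: r = {r:.2f}")
```

In a real analysis, the decoder would typically be trained per condition (OM, DM, SP) and per stream (attended in-FOV vs. unattended out-FOV), with cross-validation across trials; higher held-out correlation indicates stronger neural tracking of that stream.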
