Abstract

In crowded social settings, listeners often face the challenge of following one conversation in the presence of competing conversations. Several factors influence the difficulty of this task, including the number of talkers, the amount of reverberation, and the listener's hearing status. Beamformers in hearing aids have the potential to mitigate these factors by improving the signal-to-noise ratio, but their effectiveness in real-world settings has not yet been clearly demonstrated. Here, we used virtual reality to investigate the effect of head- and eye-steered beamformers on participants' ability to analyze complex audio-visual scenes. The participants' task was to find and locate an ongoing target story in a mixture of other stories, in scenes that differed in the number of concurrent talkers and the amount of reverberation. The talkers were distributed in the frontal hemisphere between ±105°. The primary outcome measure was the time taken to identify the location of the target talker. Preliminary results show shorter response times with beamforming than with an omnidirectional setting, especially when more talkers were present. This framework provides a new means of examining the effects of hearing technologies on behavior in complex audio-visual scenes.
