Abstract

There is widespread research and clinical interest in quantifying how the acoustics of real-world environments, such as background noise and reverberation, impede a listener’s ability to recognize speech. Conventional methods used to quantify these effects include dichotic listening via headphones in sound-attenuated booths or loudspeakers in anechoic or low-reverberant environments, neither of which allows room acoustics to be manipulated. Using a state-of-the-art Variable Room Acoustics System housed in a virtual sound room (ViSoR), this study aims to systematically assess the effects of non-individual head-related transfer functions (HRTFs) and mismatched visual perception on speech recognition in virtual acoustic environments. Young adults listened to and repeated sentences presented amid a co-located two-talker speech competitor, with reverberation times ranging from 0.4 to 1.25 s. Sentences were presented in three listening conditions: through a loudspeaker array in ViSoR with the participants’ own HRTFs (Condition 1); via headphones in a sound-attenuated booth with non-individual HRTFs (Condition 2); and using the same binaural reproduction as Condition 2 in ViSoR (Condition 3). Condition 3 serves as a control condition, allowing us to quantify the separate effects of non-individual HRTFs and visual mismatch on speech recognition. Discussion will address the validity and use of virtual acoustics in research and clinical settings.
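For readers unfamiliar with the headphone conditions described above, binaural reproduction with HRTFs amounts to convolving a source signal with a left/right head-related impulse response (HRIR) pair. The sketch below is a minimal illustration of that idea only, not the study’s rendering pipeline; the sample rate, HRIR lengths, and signals are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not the study's implementation): binaural
# rendering of a mono source by convolving it with an HRIR pair.
import numpy as np
from scipy.signal import fftconvolve


def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to get a 2-channel signal."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    # Stack into (samples, 2) and normalize to avoid clipping.
    out = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out


# Example with synthetic data: a 1 s noise burst at 48 kHz and dummy HRIRs.
fs = 48000
mono = np.random.randn(fs)
hrir_l = np.random.randn(256) * np.hanning(256)
hrir_r = np.random.randn(256) * np.hanning(256)
binaural = render_binaural(mono, hrir_l, hrir_r)  # shape: (fs + 255, 2)
```

Using the listener’s own measured HRIRs versus a generic (non-individual) set in such a convolution is precisely the manipulation contrasted across Conditions 1–3.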
