Abstract

The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues to investigate the behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanisms of SIN perception in simulated real-life acoustic scenarios. Here, we present first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and from spatial separation between target and distractors, particularly when the pitch of the target and the distractors was similar. On the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool to investigate novel research questions in simulated real-life listening. Future modified VAE-fNIRS applications are warranted to replicate the current findings and to validate the approach in research and clinical settings.

Highlights

  • While children with normal hearing (NH) might already be able to devote working memory capacity to establishing and storing novel representations based on semantics in complex, noisy environments, children with hearing loss (HL) might have to use most of their working memory capacity to predict words based on phonology and on existing lexical abilities for both novel and already known representations

  • For the normal hearing (NH) group, the repeated-measures analysis of variance (rmANOVA) of speech reception thresholds (SRTs) at 50% accuracy revealed a significant main effect of reverberation time (RT), F(1,2) = 12.81, p = 0.02, ηp² = 0.72

  • When averaged across RTs, the comprehension benefit from the 90° spatial separation was significantly larger in the same-pitch condition (M = 5.64 dB) than in the different-pitch condition (M = 2.91 dB), t(5) = 3.48, p = 0.02, d = 1.42 (see the illustrative analysis sketch after this list)
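
The group-level statistics quoted above can, in principle, be reproduced from a long-format table holding one SRT per child and condition. The following Python sketch is illustrative only: the pingouin package, the column names (subject, rt, pitch, separation, srt_db), and the condition labels ("colocated", "separated", "same", "different") are assumptions rather than the authors' actual analysis pipeline.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format table: one SRT (dB SNR at 50% correct) per child
    # and per condition (reverberation time x pitch x spatial separation).
    df = pd.read_csv("srt_long_format.csv")  # columns: subject, rt, pitch, separation, srt_db

    # Repeated-measures ANOVA: main effect of reverberation time (RT) on SRT,
    # with partial eta squared as the effect size (cf. the reported F, p, and eta_p^2).
    aov = pg.rm_anova(data=df, dv="srt_db", within="rt", subject="subject",
                      detailed=True, effsize="np2")
    print(aov)

    # Per-child separation benefit: SRT(colocated) - SRT(separated),
    # averaged across RTs, computed separately for same- and different-pitch distractors.
    wide = (df.pivot_table(index=["subject", "pitch"], columns="separation",
                           values="srt_db", aggfunc="mean")
              .assign(benefit=lambda d: d["colocated"] - d["separated"])
              .reset_index())

    same = wide.loc[wide["pitch"] == "same", "benefit"]
    diff = wide.loc[wide["pitch"] == "different", "benefit"]

    # Paired t-test comparing the separation benefit between pitch conditions
    # (cf. the reported t(5), p, and Cohen's d).
    print(pg.ttest(same.values, diff.values, paired=True))

pingouin is used here only because it reports partial eta squared and Cohen's d directly; statsmodels' AnovaRM and scipy's ttest_rel would serve equally well for the same comparisons.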


Summary

The Influence of Hearing Loss and Auditory Noise on Development

Hearing plays a crucial role in children’s development when learning through verbal communication. The current clinical fitting of HAs focuses primarily on ensuring audibility in quiet environments [9,10]. It does not directly address how children listen in complex acoustic environments, such as classrooms. While auditory spatial cues enable speech stream segregation in children with NH [14,15], the lack of reliable access to spatial hearing through hearing devices presents the biggest challenge to date for children fitted with bilateral HAs (i.e., one device in each ear). Children with HAs have been shown to achieve a spatial release from masking similar to that of children with NH when speech and background noise emanated from a frontal source, but they performed poorly when a spatial separation between target and background noise was introduced [17]. For children with HAs, cognitive and language abilities appear to strongly influence their level of SIN comprehension [17,18].
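
For reference, spatial release from masking as discussed here is conventionally quantified as the change in speech reception threshold between the colocated and the spatially separated masker configuration. The minimal Python helper below illustrates that convention with made-up dB values; it is not taken from the cited studies.

    def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
        """Spatial release from masking (SRM) in dB.

        Positive values indicate that spatially separating the masker from the
        target lowers (improves) the speech reception threshold.
        """
        return srt_colocated_db - srt_separated_db

    # Made-up illustrative values (not data from the cited studies):
    print(spatial_release_from_masking(-2.0, -7.5))  # 5.5 dB -> clear benefit from separation
    print(spatial_release_from_masking(-1.0, -1.5))  # 0.5 dB -> little benefit from separation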

The Ease of Language Understanding Model
Behavioral Speech-In-Noise Comprehension Assessments
Method
Speech Comprehension and Virtual Acoustic Reality
Speech Comprehension and Functional Near-Infrared Spectroscopy
A Novel Approach to Elucidate SIN Comprehension: A VAE-fNIRS Application
Equipment and Virtual Acoustic Environment
Experimental Design and Procedure
Behavioral Data
Neural Data
Analyses
Discussion
