Articles published on Speech Perception In Noise
- Research Article
- 10.1080/08856257.2025.2577923
- Nov 8, 2025
- European Journal of Special Needs Education
- Kiri Mealings + 2 more
Understanding how the classroom environment affects autistic children’s listening, learning, and wellbeing is vital to ensure that the space is conducive to these outcomes. This paper used the Listen to Learn for Life Assessment Framework to conduct a scoping review, following the PRISMA-ScR protocol, of what is and is not known about how the classroom environment affects the listening, learning, and wellbeing of school-aged autistic children. Thirty-five of the 1,301 papers returned by searches of five databases met the inclusion criteria. The results revealed that autistic children have poorer speech perception in noise than their neurotypical peers, but that assistive listening devices such as IQbuds, FM systems, and sound field amplification, as well as visual aids, benefit these children. Noise should still be controlled, however, to help minimise repetitive behaviours: by carefully selecting the school location, choosing quieter air-conditioning systems, and installing sound absorption. Separate quiet spaces and noise-cancelling headphones for independent work are beneficial. Classrooms should have clearly defined spaces for different activities and limit visual clutter, as these can be triggers for inattention and stress. Areas for future research, based on components of the Listen to Learn for Life Assessment Framework not yet investigated, are discussed.
- Research Article
- 10.32598/sjrm.14.5.3326
- Nov 1, 2025
- Scientific Journal of Rehabilitation Medicine
- Atie Bavandi + 1 more
Background and Aims: Various aspects of hearing aid performance can be enhanced using music, and binaural hearing plays a crucial role in speech perception in the presence of noise. This study used music, tests, and a questionnaire related to speech perception in noise to investigate the effect of listening to music on speech perception in the presence of noise in bilateral and unilateral hearing aid users. Methods: The study included 40 bilateral and unilateral hearing aid users aged 25-55 years (23 women, 17 men). The quick speech-in-noise (Q-SIN) test and the speech, spatial, and qualities of hearing scale (SSQ) questionnaire were used to investigate the effects of music on speech perception in the presence of noise. Results: The Q-SIN results in both groups showed that music reduced the 50% correct signal-to-noise ratio (SNR-50) and the SNR loss, and this reduction was more pronounced in the bilateral hearing aid users (P<0.05). The SSQ score also increased after the three-month music therapy, with a greater increase in bilateral hearing aid users (P<0.05). Conclusion: Regular, structured listening to music improves speech perception in noise, and bilateral hearing aids had a greater effect on this improvement than unilateral hearing aids.
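For readers unfamiliar with the two Q-SIN metrics reported above, the sketch below shows the standard QuickSIN scoring arithmetic (Killion et al., 2004). The exact scoring of the Q-SIN adaptation used in this study may differ, so treat this as an illustration only.

```python
def quicksin_scores(words_correct_per_sentence):
    """Score one QuickSIN-style list: 6 sentences with 5 key words each,
    presented at SNRs stepping from +25 down to 0 dB in 5 dB steps."""
    total_correct = sum(words_correct_per_sentence)  # 0..30 key words
    snr_50 = 27.5 - total_correct  # Spearman-Karber estimate of SNR at 50% correct
    snr_loss = snr_50 - 2.0        # 2 dB = average SNR-50 for normal hearing
    return snr_50, snr_loss

# Example: a listener repeats 5, 5, 4, 3, 2, and 1 key words across the list.
snr_50, snr_loss = quicksin_scores([5, 5, 4, 3, 2, 1])
print(f"SNR-50 = {snr_50:.1f} dB, SNR loss = {snr_loss:.1f} dB")
```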
- Research Article
- 10.4274/jarem.galenos.2025.69783
- Oct 27, 2025
- Journal of Academic Research in Medicine
- Gökhan Yaz + 1 more
Comparison of the Effects of Asymmetrical Directionality and Narrow Directionality on Speech Perception in Noise in Hearing Aids
- Research Article
- 10.1097/aud.0000000000001724
- Oct 23, 2025
- Ear and hearing
- Tine Arras + 7 more
Children with prelingual single-sided deafness (SSD) have difficulty understanding speech in noise and localizing sounds. They also have an increased risk of problems with their language and cognitive development. Moreover, untreated SSD can lead to cortical reorganization, that is, the aural preference syndrome. Providing these children with a cochlear implant (CI) at an early age may support improved outcomes across multiple domains. This longitudinal study aimed to identify those aspects of development that are especially at risk in children with SSD, and to determine whether early cochlear implantation affects the children's developmental outcomes. Over the past decade, 37 children with SSD completed regular auditory, language, cognitive, and balance assessments. Twenty of these children received a CI before the age of 2.5 years. The same developmental outcomes were assessed in 33 children with bilateral normal hearing who served as a control group. The present study describes spatial hearing, cognitive, and postural balance development outcomes. These were assessed using standardized tests for speech perception in noise (speech reception threshold in three spatial conditions), sound localization (mean localization error in a nine-loudspeaker set-up), cognitive skills (Wechsler Preschool and Primary Scale of Intelligence), balance (Bruininks-Oseretsky Test of Motor Proficiency), and preoperative cervical vestibular evoked myogenic potentials. Longitudinal analysis showed that the children with SSD who did not receive a CI were at risk for poorer speech perception in noise, sound localization, and verbal intelligence quotient. On average, they had higher speech reception thresholds (1.6 to 16.8 dB, depending on the spatial condition), larger localization errors (35.4°), and lower verbal intelligence quotient scores (a difference of 0.78 standard deviations). Children with SSD with a CI performed on par with the normal-hearing children on the cognitive tests. In addition, they outperformed their nonimplanted peers with SSD on tests of speech perception in noise (up to 11.1 dB lower mean speech reception threshold, depending on spatial condition) and sound localization (9.5° smaller mean error). The children with SSD, with and without a CI, achieved similar scores on behavioral tasks for postural balance. The present study shows that early cochlear implantation can improve spatial hearing outcomes and facilitate typical neurocognitive development in children with prelingual SSD. Taken together with previously published data on children's language development, the present results confirm that children with prelingual SSD can benefit from a CI provided at an early age to support their development across multiple domains. Several guidelines are suggested regarding the clinical follow-up and rehabilitation of these children.
- Research Article
- 10.1002/brb3.70924
- Oct 21, 2025
- Brain and Behavior
- Nazife Öztürk Özdeş + 1 more
Introduction: Misophonia is a condition characterized by intense emotional reactions, such as anger, anxiety, or disgust, in response to specific sounds. This study aims to investigate the speech perception performance in noise of individuals with misophonia. Recent perspectives suggest that these emotional reactions may interfere with auditory attention, particularly in socially relevant listening situations. However, little is known about how misophonia affects speech perception in noisy environments. Methods: The study included 40 individuals with misophonia and 40 healthy controls, matched for age and gender. Both groups were administered the Hearing in Noise Test (HINT) under two different scenarios: one with speech noise only and another with speech noise combined with the triggering sound of a buzzing fly. The fly sound was identified as aversive by all participants with misophonia. Speech perception performance in noise across the two scenarios was compared between the groups. Results: The findings revealed that the presence of a triggering sound significantly impaired speech perception in noise in individuals with misophonia. The misophonia group demonstrated lower performance in the presence of the triggering sound compared to the control group. Additionally, increased severity of misophonia and a greater number of triggering sounds were associated with further declines in HINT performance. Conclusion: This study highlights that misophonia adversely affects speech perception in noise. Understanding the communication challenges faced by individuals with misophonia in noisy environments provides a crucial foundation for the assessment of this disorder and the development of therapeutic interventions.
- Research Article
- 10.3389/fauot.2025.1677482
- Oct 20, 2025
- Frontiers in Audiology and Otology
- Matthew B Fitzgerald + 8 more
Introduction: Traditional approaches to improving speech perception in noise (SPIN) for hearing-aid users have centered on directional microphones and remote wireless technologies. Recent advances in artificial intelligence and machine learning offer new opportunities for enhancing the signal-to-noise ratio (SNR) through adaptive signal processing. In this study, we evaluated the efficacy of a novel deep neural network (DNN)-based algorithm, commercially implemented as Edge Mode™, in improving SPIN outcomes for individuals with sensorineural hearing loss (SNHL) beyond that of conventional environmental classification approaches. Methods: The algorithm was evaluated using (1) objective KEMAR-based performance in seven real-world scenarios, (2) aided and unaided speech-in-noise performance in 20 individuals with SNHL, and (3) real-world subjective ratings via ecological momentary assessment (EMA) in 20 individuals with SNHL. Results: Significant improvements in SPIN performance were observed on CNC+5, QuickSIN, and WIN, but not NST+5, likely due to the use of speech-shaped noise in the latter, suggesting the algorithm is optimized for multi-talker babble environments. SPIN gains were not predicted by unaided performance or degree of hearing loss, indicating individual variability in benefit, potentially due to differences in peripheral encoding or cognitive function. Furthermore, subjective EMA responses mirrored these improvements, supporting real-world utility. Discussion: These findings demonstrate that DNN-based signal processing can meaningfully enhance speech understanding in complex listening environments, underscoring the potential of AI-powered features in modern hearing aids and highlighting the need for more personalized fitting strategies.
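Edge Mode™ itself is proprietary, but the general class of DNN-based SNR enhancement it belongs to can be sketched as a network that predicts a time-frequency gain mask over noisy spectra. The PyTorch sketch below is a hedged illustration of that idea, not Starkey's implementation; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Toy mask-based speech enhancer: predicts a per-bin gain in [0, 1]
    over noisy STFT magnitudes, attenuating noise-dominated bins."""
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, noisy_mag):              # (batch, frames, n_freq)
        h, _ = self.rnn(noisy_mag)
        mask = torch.sigmoid(self.out(h))      # gain per time-frequency bin
        return mask * noisy_mag                # enhanced magnitude spectrum

# Usage: apply the predicted mask to noisy STFT magnitudes, then
# resynthesize audio with the noisy phase via an inverse STFT.
model = MaskEstimator()
enhanced = model(torch.rand(1, 100, 257))      # dummy 100-frame input
```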
- Research Article
- 10.1038/s41598-025-18800-6
- Oct 7, 2025
- Scientific Reports
- Ester Benzaquén + 5 more
Problems understanding speech-in-noise (SIN) are commonly associated with peripheral hearing loss, but pure-tone audiometry (PTA) alone fails to fully explain SIN ability, because SIN perception is based on complex interactions between peripheral hearing, central auditory processing (CAP), and other cognitive abilities. We assessed the interaction between these factors and age using a multivariate approach that allows the modelling of directional effects on theoretical constructs: structural equation modelling. We created a model to explain SIN using latent constructs for sound segregation, auditory (working) memory, and SIN perception, as well as PTA, age, and measures of non-verbal reasoning. In a sample of 207 participants aged 18–81 years, age was the largest determinant of SIN ability, followed by auditory memory. PTA did not contribute to SIN directly, although it modified sound segregation ability, which covaried with auditory memory. A second model, using a CAP latent structure formed by measures of sound segregation, auditory memory, and temporal processing, revealed CAP to be the largest determinant of SIN, ahead of age. Furthermore, we demonstrated that the impacts of PTA and non-verbal reasoning on SIN are mediated by their influence on CAP. Our results highlight the importance of central auditory processing in speech-in-noise perception.
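As an illustration of the structural-equation-modelling approach described, the sketch below specifies latent constructs and directional paths in the lavaan-style syntax of the Python semopy package; the indicator names and data file are hypothetical placeholders, not the study's actual measures.

```python
import pandas as pd
from semopy import Model

# "=~" defines a latent construct from observed indicators, "~" a
# directional path, and "~~" a covariance; names are placeholders.
desc = """
Segregation =~ seg_task1 + seg_task2
AudMemory =~ mem_task1 + mem_task2
SIN =~ sin_test1 + sin_test2
SIN ~ age + AudMemory + Segregation
Segregation ~ pta
AudMemory ~~ Segregation
"""

df = pd.read_csv("spin_data.csv")   # hypothetical dataset of test scores
model = Model(desc)
model.fit(df)
print(model.inspect())              # path estimates and p-values
```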
- Research Article
- 10.1044/2025_jslhr-24-00879
- Oct 3, 2025
- Journal of speech, language, and hearing research : JSLHR
- Arden Ricciardone + 3 more
Perceiving nonnative-accented speech is a cognitively demanding task that requires additional cognitive effort compared to perceiving native-accented speech. People who have experienced a mild traumatic brain injury (mTBI; also commonly referred to as concussion) report impairments in an overlapping set of cognitive capacities, leading to the prediction that the perception of nonnative-accented speech may be even more difficult for them than for someone without a history of brain injury. Of interest is whether people who have suffered an mTBI find nonnative-accented speech less intelligible and whether they report experiencing more cognitive symptoms than controls when perceiving nonnative-accented speech. Adults with a positive history of concussion (n = 52) and without a history of concussion (n = 69) completed a speech perception in noise (SPIN) task varying in talker accent and signal-to-noise ratio. To assess the perceived demand of this task and its influence on concussion-related symptoms, participants rated various cognitive symptom levels throughout the task. Findings show that, compared to healthy controls, those with a history of concussion may be differentially affected in their experience of completing a SPIN task with a nonnative-accented talker. More strikingly, those with a history of mTBI showed significant differences in irritability, along with somewhat reduced energy levels and increased headache levels, when listening to speech in challenging conditions, compared to individuals who have never had a brain injury. Individuals who have had a concussion in the past may experience mild impairments in the perception of nonnative-accented speech in noise. Additionally, challenging listening conditions may exacerbate existing symptoms associated with mTBI. Supplemental material: https://doi.org/10.23641/asha.30234979.
- Research Article
- 10.1162/jocn.a.2402
- Oct 3, 2025
- Journal of cognitive neuroscience
- Valeriya Tolkacheva + 3 more
Although listeners can enhance perception by using prior knowledge to predict the content of degraded speech signals, this process can also elicit "misperceptions." The neurobiological mechanisms responsible for these phenomena remain a topic of debate. There is relatively consistent evidence for involvement of the bilateral posterior superior temporal gyri (pSTG) in speech perception in noise; however, a role for the left premotor cortex (PMC) is debated. In this study, we employed transcranial magnetic stimulation (TMS) and a prime-probe paradigm for the first time to investigate causal roles for the left PMC and pSTG in speech perception and misperception. To produce misperceptions, we created partially mismatched pseudosentence probes via homophonic nonword transformations (e.g., She moved into her apartment soon after signing the lease → Che moffed inso har apachment sool amter siphing tha leals). All probe sentences were then spectrotemporally degraded and preceded by a clear prime sentence. Compared with a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudosentences. However, inhibitory stimulation of the left PMC did not significantly influence perception of either real sentences or misperceptions of pseudosentences. These results confirm a role for the left pSTG in the perception of degraded speech. However, they do not support a role for the left PMC in either lexical or sublexical processing during perception of degraded speech using ecologically valid sentence stimuli. We discuss the implications of these findings for neurobiological models of speech perception.
- Research Article
- 10.3390/audiolres15050129
- Oct 2, 2025
- Audiology Research
- Chrisanda Marie Sanchez + 4 more
Background/Objectives: Spanish-speaking patients face persistent barriers to equitable audiological care, particularly when standardized language-appropriate tools are lacking. Two Spanish-language sentence recognition tests, the Spanish AzBio Sentence (SAzB) and the Latin American Hearing in Noise Test (LAH), are commonly used to evaluate speech perception in adults with hearing loss. However, performance differences between these measures may influence referral decisions for hearing intervention, such as cochlear implantation. This study compared test performance under varying noise and spatial conditions to guide appropriate test selection and reduce the risk of misclassification that may contribute to healthcare disparities. Methods: Twenty-one bilingual Spanish/English-speaking adults with normal bilateral hearing completed speech perception testing using both the SAzB and LAH. Testing was conducted under two spatial configurations: (1) speech and noise presented from the front (0° azimuth) and (2) speech to the simulated poorer ear and noise to the better ear (90°/270° azimuth). Conditions included quiet and three signal-to-noise ratios (+10, +5, and 0 dB). Analyses included paired t-tests and one-way ANOVAs. Results: Participants scored significantly higher on the LAH than on the SAzB across all SNR conditions and configurations, with ceiling effects observed for the LAH. SAzB scores varied by language dominance, while LAH scores did not. No other differences were observed across the remaining demographic variables. Conclusions: The SAzB provides a more challenging and informative assessment of speech perception in noise. Relying on easier tests like the LAH may obscure real-world difficulties and delay appropriate referrals for hearing loss intervention, including cochlear implant evaluation. Selecting the most appropriate test is critical to avoiding under-referral and ensuring Spanish-speaking patients receive equitable and accurate care.
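The reported analyses (paired t-tests between tests, one-way ANOVAs across groups) map directly onto standard SciPy calls; the sketch below uses made-up illustrative scores, not the study's data.

```python
import numpy as np
from scipy import stats

# Percent-correct scores for the same six participants on each test
sazb = np.array([62, 55, 70, 48, 66, 59])   # SAzB (illustrative)
lah = np.array([95, 98, 92, 96, 99, 97])    # LAH (illustrative)

# Paired t-test: each participant completed both tests
t, p = stats.ttest_rel(sazb, lah)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA on SAzB scores across three hypothetical
# language-dominance groups of two participants each
f, p = stats.f_oneway(sazb[:2], sazb[2:4], sazb[4:])
print(f"F = {f:.2f}, p = {p:.4f}")
```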
- Research Article
- 10.1016/j.heares.2025.109445
- Oct 1, 2025
- Hearing research
- Andreas Büchner + 7 more
Clinical improvement of speech perception in noise with Automatic Sound Management 3.0.
- Research Article
- 10.1121/10.0039627
- Oct 1, 2025
- JASA express letters
- Sarah Knight + 2 more
Spatially separating target and masker talkers improves speech perception in noise, an effect known as spatial release from masking (SRM). Independently, the perceived location of a sound can erroneously shift towards an associated but spatially displaced visual stimulus (the "ventriloquist effect"). This study investigated whether SRM can be induced by spatially separating visual stimuli associated with a target and masker without separating the sound sources themselves. Results showed that SRM was not induced by spatially separated visual stimuli, but collocated visual stimuli reduced the benefit of auditory SRM. There was no influence of individual differences in auditory localization ability on effects related to the visual stimuli.
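For reference, SRM is conventionally computed as a simple difference of speech reception thresholds (SRTs) between the co-located and separated conditions; the sketch below uses illustrative values.

```python
# SRT = SNR (dB) at which 50% of target speech is understood; lower is better.
srt_colocated = -2.0   # target and masker from the same frontal loudspeaker
srt_separated = -8.5   # maskers displaced to the sides (illustrative values)

srm = srt_colocated - srt_separated   # positive SRM = benefit of separation
print(f"SRM = {srm:.1f} dB")
```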
- Research Article
- 10.1016/j.heares.2025.109367
- Oct 1, 2025
- Hearing research
- David Ratelle + 1 more
Neural tracking of continuous speech in adverse acoustic conditions among healthy adults with normal hearing and hearing loss: A systematic review.
- Research Article
- 10.1121/10.0039576
- Oct 1, 2025
- The Journal of the Acoustical Society of America
- Sara Momtaz + 8 more
The impacts of reverberation and binaural sensitivity on spatial release from masking (SRM) were assessed in an adaptive speech-in-speech recognition task with open-set sentence targets presented in continuous narrative speech maskers. Forty young adults with normal hearing (aged 19-55 years) completed the speech task. Under anechoic conditions, target speech and two masker streams were presented from a single frontal loudspeaker (co-located) or three loudspeakers separated by 45° (separated). In simulated reverberant conditions, reflections were calculated via a modified image-source model (10 m × 10 m; 0.4 s reverberation time) and presented throughout 360° of azimuth. Interaural time difference (ITD) and interaural level difference (ILD) discrimination thresholds were separately measured over headphones using a speech-based two-alternative forced-choice task. Reverberation elevated speech reception thresholds (SRTs) and reduced SRM by 5.4 dB on average. Individual ITD thresholds were significantly associated with the separated SRT but not with SRM, likely as a result of intercorrelation between co-located and separated scores. The overall results corroborate prior reports of roughly 5 dB SRM reduction in reverberation, and also provide evidence for individual ITD sensitivity as a predictor of spatial speech perception in noise, which may be obscured in analyses that focus only on difference scores such as SRM.
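A simulation in the spirit of the one described (room footprint and reverberation time from the abstract; sampling rate, room height, and source/listener positions are assumptions) can be set up with the image-source engine of the pyroomacoustics package:

```python
import numpy as np
import pyroomacoustics as pra

rt60 = 0.4                 # s, reverberation time from the abstract
room_dim = [10, 10, 3]     # 10 m x 10 m footprint; 3 m ceiling height assumed

# Invert Sabine's formula to get wall absorption and image-source order
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)

room = pra.ShoeBox(room_dim, fs=16000,
                   materials=pra.Material(e_absorption),
                   max_order=max_order)
room.add_source([5.0, 6.0, 1.5])                   # frontal target talker (assumed)
room.add_microphone_array(np.c_[[5.0, 5.0, 1.5]])  # listener position (assumed)
room.compute_rir()   # image-source room impulse responses in room.rir
```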
- Research Article
- 10.1121/10.0039635
- Oct 1, 2025
- The Journal of the Acoustical Society of America
- Zhe-Chen Guo + 1 more
Extended high frequencies (EHFs; above 8 kHz) improve speech perception in noise, but the underlying mechanisms remain unclear. Debate continues over whether the benefit arises from EHFs providing direct cues to phonemes or more indirectly reflects the listener's cochlear health. To examine whether and how EHFs contribute to phoneme recognition (which is difficult to test in humans given the wide variability in EHF thresholds), this study leveraged an acoustic automatic speech recognition (ASR) model. English speech from the VCTK corpus was resynthesized to create spatial audio in which target speech was masked by an interfering talker separated by 20°, 45°, 80°, or 120° azimuth at target-to-masker ratios (TMRs) from +3 to -12 dB. A convolutional neural network plus bidirectional long short-term memory (CNN-BiLSTM) model was trained to decode target phonemes from cochleagrams of broadband or low-pass filtered (e.g., 8 kHz cutoff) speech. In masked conditions, EHFs improved phoneme recognition across all spatial separations, particularly at TMRs ≤ -9 dB. The improvement was not found in quiet. Removing EHFs disproportionately increased phoneme error rates for consonants, consistent with consonants' spectral concentration at higher frequencies. These findings indicate that EHFs contribute directly to phoneme recognition in adverse conditions, supporting their inclusion in clinical audiometry and ASR system development.
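A minimal PyTorch sketch of a CNN-BiLSTM phoneme decoder of the kind described is shown below; it operates on cochleagram-like time-frequency inputs, and all layer sizes and the phoneme-inventory size are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Toy per-frame phoneme decoder over cochleagram-like inputs."""
    def __init__(self, n_channels=64, n_phonemes=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                 # pool frequency, keep time
        )
        self.lstm = nn.LSTM(32 * (n_channels // 2), 128,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, n_phonemes)

    def forward(self, cochleagram):               # (batch, 1, freq, time)
        x = self.conv(cochleagram)                # (batch, 32, freq/2, time)
        x = x.permute(0, 3, 1, 2).flatten(2)      # (batch, time, features)
        x, _ = self.lstm(x)                       # bidirectional temporal context
        return self.fc(x)                         # per-frame phoneme logits

logits = CNNBiLSTM()(torch.rand(1, 1, 64, 200))  # dummy 200-frame cochleagram
```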
- Research Article
- 10.1044/2025_lshss-25-00053
- Sep 15, 2025
- Language, speech, and hearing services in schools
- Fatma Yurdakul Çınar + 1 more
Understanding children's speech perception strategies in noise is important for improving their everyday listening environments. Previous studies with adults reported that closing the eyes improves speech understanding in noise by increasing the activation of cortical systems involved in listening and attention, while increased cognitive load makes speech understanding in noise more difficult. This study aimed to investigate the effects of listening conditions on speech perception in noise in children. The study recruited 102 typically developing children with typical hearing, 51 girls and 51 boys, aged between 7 and 12 years. Speech intelligibility tests in noise were performed under three different conditions: eyes open (EO), eyes closed (EC), and watching a cartoon (WC), which is assumed to increase cognitive load. The conditions were applied to each participant in random order. The lowest signal-to-noise ratio (i.e., the best performance) was obtained in the EO condition, followed by the EC and WC conditions. In post hoc pairwise comparisons, the largest effect size was found for EO-WC, followed by EO-EC and EC-WC. No statistically significant gender differences were found for any of the three listening conditions. The results show that children's speech perception abilities in noise are affected to different degrees by factors such as whether the eyes are open or closed, auditory attention, and cognitive load. The best speech perception performance in noise was obtained in the EO condition, which is the natural situation.
- Research Article
- 10.3390/audiolres15050113
- Sep 8, 2025
- Audiology Research
- Annie Moulin + 2 more
Background/Objectives: Potential correlations between self-report questionnaire scores and speech perception in noise abilities vary widely among studies and have been little explored in patients with conventional hearing aids (HAs). This study aimed to analyse the interrelations between (1) self-report auditory scales, namely the 15-item short form of the Speech, Spatial and Qualities of Hearing Scale (15iSSQ) and the Extended Listening Effort Assessment Scale (EEAS); (2) speech perception in cocktail-party noise, measured with and without HAs; and (3) a self-assessment of the listening effort perceived during the speech-in-noise perception task (TLE) in hearing-aid wearers. Material and Methods: Thirty-two patients, with a mean age of 77.5 years (SD = 12) and a mean HA experience of 5.6 years, completed the 15iSSQ and EEAS. Their speech-in-babble-noise perception thresholds (SPIN) were assessed with (HA_SPIN) and without their HAs (UA_SPIN), using a four-alternative forced-choice test in free field at several fixed signal-to-noise ratios (SNRs). Participants were asked to self-assess their listening effort at each SNR, allowing a task-related listening-effort threshold to be defined with (HA_TLE) and without HAs (UA_TLE), i.e., the SNR at which they rated their listening effort as 5 out of 10. Results: 15iSSQ decreased as both HA_SPIN (r = −0.47, p < 0.01) and HA_TLE (r = −0.36, p < 0.05) increased. The relationship between 15iSSQSpeech and UA_SPIN (and UA_TLE) was strongly moderated by HA experience and HA daily wear (HADW), which explained up to 31% of the variance. 15iSSQQuality depended on HA_SPIN and HA_TLE (r = −0.50, p < 0.01), and the relationship between 15iSSQQuality and UA_TLE was moderated by HADW. EEAS scores depended on both HA experience and UA_SPIN, with a strong moderating influence of HADW. Conclusions: Relationships between auditory questionnaires and SPIN are strongly moderated by both HA experience and HADW, even in experienced HA users, showing the need to account for these variables when analysing relationships between questionnaires and hearing-in-noise tests in experienced HA wearers.
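The moderation analyses described amount to testing an interaction term in a regression; the sketch below shows one such test with statsmodels, using hypothetical column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ha_users.csv")   # hypothetical dataset, one row per patient

# UA_SPIN * HADW expands to UA_SPIN + HADW + UA_SPIN:HADW; a significant
# interaction coefficient means the UA_SPIN-questionnaire relationship
# changes with daily hearing-aid wear (i.e., moderation).
model = smf.ols("ssq_speech ~ UA_SPIN * HADW", data=df).fit()
print(model.summary())
```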
- Research Article
- 10.1016/j.heares.2025.109345
- Sep 1, 2025
- Hearing research
- Anadel Khalaila-Zbidat + 1 more
Neural and perceptual speech in noise processing among 6-8-year-old children: Relation to working memory.
- Research Article
- 10.1016/j.scog.2025.100362
- Sep 1, 2025
- Schizophrenia research. Cognition
- Lei Liu + 7 more
Impaired non-verbal auditory memory maintenance in schizophrenia: An ERP study.
- Research Article
- 10.1002/lary.70058
- Aug 22, 2025
- The Laryngoscope
- Halime Sümeyra Sevmez + 1 more
This study aims to evaluate the effects of tinnitus on extended high-frequency (EHF) hearing thresholds, temporal fine structure (TFS) sensitivity, speech perception in noise (SPiN), and cognitive functions in individuals with normal hearing thresholds. Additionally, it aims to investigate the effects of tinnitus on central auditory mechanisms and cognitive functions while controlling for the influence of EHF hearing loss. A total of 40 participants (19 tinnitus patients with normal hearing and 21 controls) were assessed. TFS sensitivity, SPiN, cognitive performance, and EHF hearing thresholds were evaluated using the TFS-AF test, the Turkish Matrix Test, the Rey Auditory Verbal Learning Test (RAVLT), and audiometry, respectively. The tinnitus group showed significantly reduced TFS sensitivity (p = 0.043), poorer SPiN performance (p = 0.026), and lower RAVLT mean (p = 0.008), RAVLT 6 (p = 0.048), and RAVLT 7 (p = 0.001) scores compared to controls. EHF thresholds were higher in the tinnitus group (p = 0.032) and moderately negatively correlated with TFS sensitivity (r = -0.32, p = 0.042). TFS sensitivity remained linked to SPiN performance after controlling for EHF hearing loss (p = 0.048). The results revealed that tinnitus was associated with higher EHF hearing thresholds, reduced TFS sensitivity, poorer SPiN performance, and difficulties in cognitive functions such as learning and memory. Higher EHF hearing thresholds appeared to be an important factor in the decline of these abilities. Furthermore, when the effect of EHF hearing thresholds was excluded, reduced SPiN performance in tinnitus patients remained associated with TFS sensitivity.
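Statistically, "controlling for the influence of EHF hearing loss" as described can be done with a partial correlation; the sketch below uses the pingouin package with hypothetical column names.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("tinnitus_data.csv")   # hypothetical dataset

# Correlation between TFS sensitivity and SPiN performance,
# holding EHF threshold constant
res = pg.partial_corr(data=df, x="tfs_sensitivity", y="spin_score",
                      covar="ehf_threshold")
print(res)
```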