Articles published on Speech in Noise
- Research Article
- 10.1044/2025_jslhr-24-00330
- Nov 7, 2025
- Journal of Speech, Language, and Hearing Research (JSLHR)
- Harvey Dillon + 4 more
This research was carried out to create a new, realistic speech-in-noise test designed to be sensitive to several causes of difficulty understanding speech in noise. The test, conducted under headphones, simulates listening in a typically reverberant classroom. It comprises a frontal target talker speaking high-context sentences and six competing talkers at different apparent locations. The first experiment measured the degree of context in the sentences by presenting them in writing, with one or two words missing, to adult participants who were asked to guess the missing word(s). The 48 highest-context sentences were then presented to young adults through headphones, with the competing speech, to measure the relative intelligibility of every morpheme in each sentence. The level of each morpheme was then adjusted to minimize intelligibility differences between morphemes. In the second and main experiment, the final version of the test was presented to 103 adults and 77 children (aged 6-12 years) to create normative data for an Australian-accented version of the test. In children, the mean speech reception threshold in noise (SRTn) improved at a rate of 0.5 dB per year, down to -12.1 dB at 12 years of age. The regression line suggests that performance reaches that of young adults (SRTn = -13.3 dB) at about 14 years of age. The Test of Listening Difficulties-Universal (ToLD-U) appears to be suitable for assessing speech understanding in both children and adults under realistic, challenging listening conditions. It is the first test to combine a realistic simulation of real-world environments (conversational-style, sentence-level stimuli equalized for morpheme-level intelligibility, multiple competing talkers, and reverberation) with suitability for routine clinical use via headphone presentation. Studies evaluating the ToLD-U in a clinical setting are in progress. https://doi.org/10.23641/asha.30493694.
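The age trend reported above can be checked with simple arithmetic: if SRTn improves by 0.5 dB per year and sits at -12.1 dB at age 12, the adult value of -13.3 dB is reached about 2.4 years later. A minimal sketch of that extrapolation in Python, assuming a purely linear trend inferred from the quoted numbers (not the authors' fitted regression):

```python
# Extrapolate the reported child SRTn trend to the adult level.
# Assumes SRTn(age) = srtn_at_12 - slope * (age - 12), using only the values
# quoted in the abstract; this is not the authors' actual regression model.

slope_db_per_year = 0.5   # improvement (decrease in SRTn) per year of age
srtn_at_12 = -12.1        # dB SNR at 12 years of age
adult_srtn = -13.3        # dB SNR for young adults

def srtn(age_years: float) -> float:
    """Predicted SRTn (dB SNR) at a given age under the assumed linear trend."""
    return srtn_at_12 - slope_db_per_year * (age_years - 12)

# Age at which the child trend meets the adult mean
crossover_age = 12 + (srtn_at_12 - adult_srtn) / slope_db_per_year
print(f"SRTn at 12 y: {srtn(12):.1f} dB; crossover with adults at ~{crossover_age:.1f} y")
# -> about 14.4 years, consistent with the abstract's figure of roughly 14 years
```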
- Research Article
- 10.1016/j.heares.2025.109415
- Nov 1, 2025
- Hearing Research
- Liu Yang + 4 more
Functional characteristics of speech perception decline in healthy aging based on resting-state EEG-fNIRS.
- Research Article
- 10.3390/audiolres15060145
- Oct 25, 2025
- Audiology Research
- Margarida Roque + 2 more
Background/Objectives: Shift work in healthcare professionals affects performance in tasks requiring high-level cognitive processing, especially in complex environments. However, the beneficial effects that working in complex environments may have on auditory–cognitive processing remain unknown. These professionals face increased challenges in decision-making due to factors such as noise exposure and sleep disturbances, which may lead to the development of enhanced auditory–cognitive resources. This study aims to investigate the associations between shift work and auditory–cognitive processing in middle-aged healthcare workers. Methods: Thirty middle-aged healthcare workers were equally allocated to a shift worker (SW) group or a fixed-schedule worker (FSW) group. Performance on a cognitive test, pure-tone audiometry, speech perception in quiet and in noise, and listening effort measures was used to explore whether correlations were specific to shift work. Results: Exploratory analyses indicated that shift workers tended to perform better than fixed-schedule workers in the visuospatial/executive function, memory recall, memory index, and orientation domains and in total MoCA score. In the SW group, hearing thresholds correlated with memory recall and memory index. In the FSW group, hearing thresholds correlated with orientation, memory index, and total MoCA score, while listening effort correlated with naming, and speech intelligibility in quiet correlated with total MoCA score. Conclusions: These exploratory findings suggest that shift work may be linked to distinct auditory–cognitive patterns, with potential compensatory mechanisms in visuospatial/executive functions and memory among middle-aged healthcare workers. Larger, longitudinal studies are warranted to confirm whether these patterns reflect true adaptive mechanisms.
- Research Article
- 10.1097/aud.0000000000001724
- Oct 23, 2025
- Ear and Hearing
- Tine Arras + 7 more
Children with prelingual single-sided deafness (SSD) have difficulty understanding speech in noise and localizing sounds. They also have an increased risk of problems with their language and cognitive development. Moreover, untreated SSD can lead to cortical reorganization, that is, the aural preference syndrome. Providing these children with a cochlear implant (CI) at an early age may support improved outcomes across multiple domains. This longitudinal study aimed to identify those aspects of development that are especially at risk in children with SSD, and to determine whether early cochlear implantation affects the children's developmental outcomes. Over the past decade, 37 children with SSD completed regular auditory, language, cognitive, and balance assessments. Twenty of these children received a CI before the age of 2.5 yr. The same developmental outcomes were assessed in 33 children with bilateral normal hearing who served as a control group. The present study describes spatial hearing, cognitive, and postural balance development outcomes. These were assessed using standardized tests for speech perception in noise (speech reception threshold in three spatial conditions), sound localization (mean localization error in a nine-loudspeaker set-up), cognitive skills (Wechsler Preschool and Primary Scale of Intelligence), balance (Bruininks-Oseretsky Test of Motor Proficiency), and preoperative cervical vestibular evoked myogenic potentials. Longitudinal analysis showed that the children with SSD who did not receive a CI were at risk for poorer speech perception in noise, sound localization, and verbal intelligence quotient. On average, they had higher speech reception thresholds (1.6 to 16.8 dB, depending on the spatial condition), larger localization errors (35.4°), and lower verbal intelligence quotient scores (difference of 0.78 standard deviations). Children with SSD with a CI performed on par with the normal-hearing children on the cognitive tests. In addition, they outperformed their nonimplanted peers with SSD on tests for speech perception in noise (up to 11.1 dB lower mean speech reception threshold, depending on spatial condition) and sound localization (9.5° smaller mean error). The children with SSD, with and without a CI, achieved similar scores on behavioral tasks for postural balance. The present study shows that early cochlear implantation can improve spatial hearing outcomes and facilitate typical neurocognitive development in children with prelingual SSD. Taken together with previously published data related to children's language development, the present results confirm that children with prelingual SSD can benefit from a CI provided at an early age to support their development across multiple domains. Several guidelines are suggested regarding the clinical follow-up and rehabilitation of these children.
- Research Article
- 10.1097/aud.0000000000001725
- Oct 23, 2025
- Ear and Hearing
- Weijie Weng + 6 more
This study assessed the outcomes of cochlear implantation (CI) for the management of pediatric single-sided deafness (SSD) in an Australian tertiary pediatric center. We performed a retrospective review of data from the Western Australian childhood hearing implant program between 2014 and 2023. Patients with SSD aged below 16 yr who underwent unilateral CI at Perth Children's Hospital were included. Data collected included demographics, history, pre-CI assessment, language ability, Bamford-Kowal-Bench (BKB)-Sentence in Noise (SIN) results, CI usage hours, localization, and Speech, Spatial and Qualities of Hearing Scale scores. Results were compared with and without CI where available. There were 22 patients who underwent CI for SSD in the 10-yr period. A total of 54.5% were male and 45.5% were female. The average age at diagnosis was 4.2 yr (0.0 to 15.1, SD: 4.8). Six patients (27.3%) were classified as late-onset SSD and 16 (72.7%) as early SSD. The average time to implantation was 1.3 yr (0.1 to 3.9 yr, SD: 1.3). Eleven patients (50%) were classified as early CI and 11 patients as late CI. The average age at implantation was 5.5 yr (1.0 to 15.6 yr, SD: 4.7). The etiology of the SSD was unknown in 8 patients (36.7%). Three of our patients showed improvement in language ability 1 yr after implantation. No patients scored poorer compared with their pre-CI language assessment. Six of the 22 subjects (27.27%) underwent localization testing. There was no significant difference identified between CI and without CI for localization (20% versus 24%; p = 0.93). Thirteen of 22 (59.09%) completed BKB-SIN testing. Five patients performed better in their BKB-SIN with their CI and 12 performed worse with their CI. Of these 5 patients, 3 were in the early-onset SSD group and 2 in the late-onset SSD group. Of the same 5 patients, 4 were classified as early CI and 1 was classified as late CI. The majority of patients who benefited on BKB-SIN (80%) and localization testing (75%) were classified as early CI, that is, implanted within 1 yr of onset. The average SNR loss in the early SSD group was 2.91 dB (1.3 to 6) with CI and 2.84 dB (0.5 to 5.5) without CI (p = 0.462). For late-onset SSD, the average with CI was 3.72 dB (0.01 to 6) and 3.78 dB (0.1 to 7.7) without CI (p = 0.475). The overall average for all patients, with and without CI, was 3.28 dB versus 3.28 dB. Eight patients (34.78%) completed the Speech, Spatial and Qualities of Hearing Scale. The average CI usage was 4 hr/d (0 to 12.9, SD: 3.7). Despite a well-established newborn hearing screening program and equitable access to specialized services, our patients have variable long-term outcomes. While several patients showed benefit in speech in noise, high usage rates, and improved language skills, the challenge remains in consistently predicting and rehabilitating a heterogeneous population of pediatric SSD patients. Patients who were classified as early CI recipients performed better, regardless of the onset of SSD.
- Research Article
- 10.1002/brb3.70924
- Oct 21, 2025
- Brain and Behavior
- Nazife Öztürk Özdeş + 1 more
Introduction: Misophonia is a condition characterized by intense emotional reactions, such as anger, anxiety, or disgust, in response to specific sounds. This study aims to investigate the speech perception performance in noise of individuals with misophonia. Recent perspectives suggest that these emotional reactions may interfere with auditory attention, particularly in socially relevant listening situations. However, little is known about how misophonia affects speech perception in noisy environments. Methods: The study included 40 individuals with misophonia and 40 healthy controls, matched for age and gender. Both groups were administered the Hearing in Noise Test (HINT) under two different scenarios: one with speech noise only and another with speech noise combined with the triggering sound of a buzzing fly. The fly sound was identified as aversive by all participants with misophonia. Speech perception performance in noise was compared between the groups across the two scenarios. Results: The findings revealed that the presence of a triggering sound significantly impaired speech perception in noise in individuals with misophonia. The misophonia group demonstrated lower performance in the presence of the triggering sound compared to the control group. Additionally, increased severity of misophonia and a greater number of triggering sounds were associated with further declines in HINT performance. Conclusion: This study highlights that misophonia is a condition that adversely affects speech perception in noise. Understanding the communication challenges faced by individuals with misophonia in noisy environments provides a crucial foundation for the assessment of this disorder and the development of therapeutic interventions.
- Research Article
- 10.1177/13623613251376484
- Oct 14, 2025
- Autism
- Jiayin Li + 4 more
Recognising speech in noise involves focusing on a target speaker while filtering out competing voices and sounds. Acoustic cues, such as vocal characteristics and spatial location, help differentiate between speakers. However, autistic individuals may process these cues differently, making it more challenging for them to perceive speech in such conditions. This study investigated how autistic individuals use acoustic cues to follow a target speaker and whether background music increases processing demands. Thirty-six autistic and 36 non-autistic participants, recruited in the United Kingdom, identified information from a target speaker while ignoring a competing speaker and background music. The competing speaker’s gender and location either matched or differed from the target. The autistic group exhibited lower mean accuracy across cue conditions, indicating general challenges in recognising speech in noise. Trial-level analyses revealed that while both groups showed accuracy improvements over time without acoustic cues, the autistic group demonstrated smaller gains, suggesting greater difficulty in tracking the target speaker without distinct acoustic features. Background music did not disproportionately affect autistic participants but had a greater impact on those with stronger local processing tendencies. Using a naturalistic paradigm mimicking real-life scenarios, this study provides insights into speech-in-noise processing in autism, informing strategies to support speech perception in complex environments. Lay abstract: This study examined how autistic and non-autistic adults understand speech when other voices or music were playing in the background. Participants focused on one main speaker while another voice played simultaneously. Sometimes, the second voice differed from the main one in gender or where the sound was coming from. These differences made it easier to tell the voices apart and understand what the main speaker was saying. Both autistic and non-autistic participants did better when these differences were present. But autistic individuals struggled more when the two voices were the same gender and came from the same location. Background music also made it harder to understand speech for everyone, but it especially affected autistic participants who tended to focus more on small details. These findings help us understand how autistic individuals process speech in noisy environments and could lead to better ways to support communication.
- Research Article
- 10.1044/2025_jslhr-24-00443
- Oct 14, 2025
- Journal of Speech, Language, and Hearing Research (JSLHR)
- Andreas Schroeer + 5 more
This study investigated the applicability, in a free-field environment, of speech-induced binaural beats (SBBs), a phase modulation procedure that can be applied to arbitrary speech signals and that, when presented dichotically, generates cortical auditory evoked potentials (CAEPs) as an objective marker of binaural interaction. Furthermore, the effect of speech-shaped masking noise on CAEPs was investigated. Nineteen normal-hearing participants listened to sentences from a sentence matrix test. Sentences were presented from two loudspeakers situated 1 m away to the left and right of the participant. Each sentence contained one SBB and was presented in silence and in three different variations of masking noise: (a) identical noise from the same loudspeakers as the speech signals, (b) modified/phase-modulated noise from the same loudspeakers as the speech signals, and (c) noise presented from a separate loudspeaker placed behind the participants. Additionally, five participants listened to the sentences without noise, with and without one ear occluded, to ascertain the possibility of acoustic interference. CAEPs were successfully recorded in all participants, both in the no-noise condition and in all noise conditions. The presentation of noise from a separate loudspeaker significantly reduced the N1 amplitude. No CAEPs were recorded when one ear was occluded, indicating no contribution of acoustic interference. SBBs can be used to reliably evoke CAEPs as an objective marker of binaural interaction in the free field with masking noise. The advantage of this method is the use of speech material and the possible integration with existing behavioral tests for binaural interaction that utilize speech signals. https://doi.org/10.23641/asha.30063004.
- Research Article
- 10.1044/2025_jslhr-24-00862
- Oct 14, 2025
- Journal of Speech, Language, and Hearing Research (JSLHR)
- Sarah E Yoho + 3 more
Here, we investigated how intelligibility is impacted in underappreciated, highly complex, but real-world communication scenarios involving two clinical populations: when the speaker has dysarthria and the listener has hearing loss, in noisy everyday environments. As a second aim, we examined the potential for modern noise reduction to mitigate the noise burden when listeners with hearing loss are attempting to understand a speaker with dysarthria. Thirteen adults with sensorineural hearing loss (SNHL) listened to and transcribed dysarthric speech under three processing conditions: quiet, noise, and noise reduced. The intelligibility scores of listeners with SNHL were compared with previously reported data collected from adults without hearing loss (Borrie et al., 2023). Listeners with SNHL performed significantly poorer than typical-hearing listeners when listening to speech produced by a speaker with dysarthria, an intelligibility disadvantage that was exacerbated when background noise was present. However, it was also found that a time-frequency-based noise reduction technique was able to effectively restore the intelligibility of dysarthric speech in noise to approximately the levels observed in quiet for listeners with hearing loss. The results highlight the substantial intelligibility burden placed upon a communication dyad consisting of a speaker with dysarthria and a listener with hearing loss when background noise is present. Given the etiologies of dysarthria and hearing loss, and the presence of noise in many everyday communication environments, this scenario is not uncommon. As such, these results are an important first step toward understanding the challenges experienced when communication disorders interact. The finding that noise reduction techniques can mitigate much of the noise burden provides a promising future direction for research that seeks to manage communication involving two clinical populations.
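The noise-reduction approach is described only as "time-frequency based." As a generic illustration of that class of technique (not the specific algorithm evaluated in the study), a basic spectral-subtraction scheme attenuates time-frequency bins dominated by noise; the window length and over-subtraction factor below are arbitrary assumptions:

```python
# Minimal spectral-subtraction-style noise reduction in the time-frequency domain.
# Generic illustration only; not the algorithm evaluated by the authors.
import numpy as np
from scipy.signal import stft, istft

def reduce_noise(noisy: np.ndarray, noise_only: np.ndarray, fs: float,
                 oversubtract: float = 1.0, nperseg: int = 512) -> np.ndarray:
    """Attenuate spectro-temporal bins dominated by noise.

    noisy      : noisy speech waveform
    noise_only : a noise-only segment used to estimate the noise spectrum
    """
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)

    noise_mag = np.mean(np.abs(N), axis=1, keepdims=True)   # mean noise magnitude per bin
    clean_mag = np.maximum(np.abs(Z) - oversubtract * noise_mag, 0.0)

    Z_clean = clean_mag * np.exp(1j * np.angle(Z))           # reuse the noisy phase
    _, enhanced = istft(Z_clean, fs=fs, nperseg=nperseg)
    return enhanced
```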
- Research Article
- 10.1038/s41598-025-18800-6
- Oct 7, 2025
- Scientific Reports
- Ester Benzaquén + 5 more
Problems understanding speech-in-noise (SIN) are commonly associated with peripheral hearing loss, but pure-tone audiometry (PTA) alone fails to fully explain SIN ability. This is because SIN perception is based on complex interactions between peripheral hearing, central auditory processing (CAP), and other cognitive abilities. We assessed the interactions among these factors and age using a multivariate approach that allows directional effects on theoretical constructs to be modelled: structural equation modelling. We created a model to explain SIN using latent constructs for sound segregation, auditory (working) memory, and SIN perception, as well as PTA, age, and measures of non-verbal reasoning. In a sample of 207 participants aged 18–81 years, age was the biggest determinant of SIN ability, followed by auditory memory. PTA did not contribute to SIN directly, although it modified sound segregation ability, which covaried with auditory memory. A second model, using a CAP latent structure formed by measures of sound segregation, auditory memory, and temporal processing, revealed CAP to be the largest determinant of SIN, ahead of age. Furthermore, we demonstrated that the impacts of PTA and non-verbal reasoning on SIN are mediated by their influence on CAP. Our results highlight the importance of central auditory processing in speech-in-noise perception.
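The modelling approach described here (latent constructs for sound segregation, auditory memory, and SIN perception, with PTA, age, and non-verbal reasoning as observed predictors) is conventionally expressed in lavaan-style syntax. The sketch below, using the Python package semopy, is purely illustrative: the indicator names and the exact paths are placeholders, not the authors' fitted model.

```python
# Illustrative structural equation model in the spirit of the study described above.
# Indicator names (seg1..seg3, mem1..mem3, sin1..sin3) and observed predictors
# (pta, age, reasoning) are hypothetical column names, not the study's measures.
import pandas as pd
import semopy

MODEL_DESC = """
Segregation =~ seg1 + seg2 + seg3
AudMemory   =~ mem1 + mem2 + mem3
SIN         =~ sin1 + sin2 + sin3
Segregation ~ pta
SIN ~ age + AudMemory + Segregation + reasoning
Segregation ~~ AudMemory
"""

def fit_sem(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the illustrative SEM and return its parameter estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    return model.inspect()  # table of path estimates and standard errors
```

In this syntax, `=~` lines define the measurement model (latent constructs from observed indicators), `~` lines define directional structural paths, and `~~` specifies a covariance between latents.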
- Research Article
- 10.1044/2025_jslhr-24-00879
- Oct 3, 2025
- Journal of Speech, Language, and Hearing Research (JSLHR)
- Arden Ricciardone + 3 more
Perceiving nonnative-accented speech is a cognitively demanding task that requires additional cognitive effort compared to perceiving native-accented speech. People who have experienced a mild traumatic brain injury (mTBI; also commonly referred to as concussion) report impairments in an overlapping set of cognitive capacities, leading to the prediction that perceiving nonnative-accented speech may be even more difficult for them than for someone without a history of brain injury. Of interest is whether people who have suffered an mTBI find nonnative-accented speech less intelligible and whether they report experiencing more cognitive symptoms than controls when perceiving nonnative-accented speech. Adults with a positive history of concussion (n = 52) and without a history of concussion (n = 69) completed a speech perception in noise (SPIN) task varying in talker accent and signal-to-noise ratio level. To assess the perceived demand of this task and its influence on concussion-related symptoms, participants rated various cognitive symptom levels throughout the task. Findings from this study show that, compared to healthy controls, those with a history of concussion may be differentially affected in their experience of completing a SPIN task with a nonnative-accented talker. More strikingly, those with a history of mTBI showed significant differences in irritability, as well as somewhat reduced energy levels and increased headache levels, when listening to speech in challenging conditions compared to individuals who had never had a brain injury. Individuals who have had a concussion in the past may experience mild impairments in the perception of nonnative-accented speech in noise. Additionally, challenging listening conditions may exacerbate existing symptoms associated with mTBI. https://doi.org/10.23641/asha.30234979.
- Research Article
- 10.1002/lio2.70273
- Oct 1, 2025
- Laryngoscope Investigative Otolaryngology
- Matthias Hey + 1 more
Objectives: The individual mapping of cochlear implants (CIs) aims to optimize the user's speech understanding. Recent investigations have shown the importance of soft speech: (1) according to Datalog studies, a large proportion of speech components lies in the range below 60 dB, and (2) soft speech represents a separate category in CI outcome, in addition to supra-threshold speech and speech in noise. Soft-speech understanding can be influenced by optimizing T-values or by global parameters (loudness growth and TSPL in the Nucleus system). This study focussed on improving the understanding of soft speech below 60 dB by optimizing loudness growth. Methods: Speech understanding with varying loudness growth in the CP11 speech processor (Cochlear Ltd.) was compared in 20 experienced adult CI users. The mean soft-speech score, based on monosyllabic words at 40 and 50 dB, was introduced for quantification. Results: Six of the 20 patients studied showed significant individual improvement for soft speech when loudness growth was optimized, while none showed a significant decrease under quiet or noisy test conditions. Conclusion: Current CI systems offer a broad loudness range for speech understanding. In addition to suprathreshold speech understanding, attention should also be paid to soft speech, and the result should therefore be confirmed by speech audiometry at low levels. Level of Evidence: 2.
- Research Article
- 10.1044/2025_jslhr-24-00742
- Sep 30, 2025
- Journal of Speech, Language, and Hearing Research (JSLHR)
- Elena Giovanelli + 5 more
When listening to speech in noise, lipreading can facilitate communication. However, beyond its objective benefits, individuals' perceptions of lipreading advantages may influence their motivation to use it in daily interactions. We investigated to what extent older and younger adults are metacognitively aware of lipreading benefits, focusing not only on performance improvements but also on changes in confidence and listening effort and on the internal evaluations (confidence and effort) that shape listening experiences and may influence strategy adoption. Forty participants completed a hearing-in-noise task in virtual reality, facing a human-like avatar behind a translucent panel that varied in transparency to create pairs of conditions with different lip visibility. We measured audiovisual performance, confidence, and effort, deriving both real improvements (i.e., lipreading gain) and metacognitive improvements (i.e., perceived changes in accuracy, confidence, and effort) on a trial-by-trial basis. Both age groups experienced comparable real improvements from lipreading and were similarly aware of its benefits for accuracy and confidence. Yet, older adults were less sensitive to the reduction of listening effort associated with higher lip visibility, particularly those with lower unisensory lipreading abilities (as measured in a visual-only condition). While younger and older adults share similar awareness of lipreading benefits in speech perception, reduced sensitivity to effort reduction may impact older adults' motivation to use lipreading in everyday communication. Given the role of perceived effort in strategy adoption, these findings highlight the importance of addressing effort perceptions in interventions aimed at improving communication in aging populations. https://doi.org/10.23641/asha.30179404.
- Research Article
- 10.1007/s10162-025-01008-w
- Sep 29, 2025
- Journal of the Association for Research in Otolaryngology (JARO)
- Ishan Sunilkumar Bhatt + 6 more
The present study employed a data-driven and hypothesis-free approach to identify comorbidities associated with age-related hearing loss (ARHL), speech-in-noise (SIN) deficits, and tinnitus. The study performed phenome-wide co-occurrence association analyses using the UK Biobank cohort to identify comorbidities associated with ARHL (N = 429,318), SIN deficits (N = 437,155), tinnitus (N = 172,527), and tinnitus severity (N = 57,657). Medical health records were accessed to obtain ICD-10 codes, which were converted into phecodes reflecting a modern disease classification. The statistical analysis was conducted to identify comorbidities associated with ARHL, SIN deficits, tinnitus, and tinnitus severity while statistically controlling for age, sex, ethnicity, and genetic ethnicity. Phenotype risk scores (PheRS) for hearing conditions were calculated. A complementary phenome-wide genetic correlation analysis was conducted to identify genetic comorbidities associated with these conditions. We utilized the summary statistics of recent genome-wide association studies of ARHL (N = 723,266), SIN deficits (N = 443,482), tinnitus (N = 132,438), and tinnitus severity (N = 132,438). The results of the phenome-wide association analyses were subjected to enrichment analysis to identify trait categories involved in hearing conditions. A complementary phenome-wide latent causal variable (LCV) analysis was employed to obtain causal inference by distinguishing between horizontal pleiotropy and true causality. The phenome-wide co-occurrence association analysis identified 383, 449, 283, and 216 medical conditions associated (FDR p < 0.05) with ARHL, SIN deficits, tinnitus, and tinnitus severity, respectively. Gastrointestinal conditions revealed significant enrichment across all traits. Respiratory, genitourinary, and sense-organ conditions showed significant enrichment with ARHL, SIN deficits, and tinnitus. SIN deficits and tinnitus severity showed significant enrichment with mental health and neurological conditions. Elevated PheRS significantly increased the risk of expressing their respective phenotypes. A phenome-wide genetic correlation analysis identified 376, 254, 97, and 188 health traits associated with ARHL, SIN deficits, tinnitus, and tinnitus severity, respectively. Mental health and medical symptoms were significantly enriched for all hearing conditions in the genetic correlation analyses. The results of LCV analyses revealed widespread horizontal pleiotropy driving most genetic correlations. In contrast, only a few traits demonstrated a true causal relationship. This study mapped phenotypic and genotypic comorbidity profiles of ARHL, SIN deficits, tinnitus, and tinnitus severity. We observed a robust enrichment of gastrointestinal traits with all hearing conditions, suggesting a potential role of gut dysbiosis in their pathogenesis. The associations between mental health and hearing conditions suggest a complex interplay between auditory and psychological health. Genetic analyses provided compelling evidence that most comorbidities are driven by a shared genetic architecture, rather than true causality.
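Phenotype risk scores (PheRS) of the kind mentioned above are typically computed as a weighted sum over an individual's phecodes, with rarer (more informative) phecodes weighted more heavily. The sketch below uses log-inverse-prevalence weights as an illustrative convention; the study's exact weighting scheme is not specified in the abstract:

```python
# Illustrative phenotype risk score (PheRS): weighted sum of present phecodes.
# The inverse-prevalence weighting is a common convention, assumed here for
# illustration rather than taken from the study.
import numpy as np
import pandas as pd

def phers(phecode_matrix: pd.DataFrame, target_phecodes: set) -> pd.Series:
    """Compute a PheRS per individual.

    phecode_matrix  : binary DataFrame, rows = individuals, columns = phecodes
    target_phecodes : phecodes previously associated with the hearing condition
    """
    cols = [c for c in phecode_matrix.columns if c in target_phecodes]
    prevalence = phecode_matrix[cols].mean(axis=0).clip(lower=1e-6)
    weights = -np.log(prevalence)                    # rarer phecodes weigh more
    return phecode_matrix[cols].mul(weights, axis=1).sum(axis=1)
```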
- Research Article
- 10.1163/22134808-bja10160
- Sep 24, 2025
- Multisensory Research
- Yurika Tsuji + 3 more
Autistic individuals experience temporal integration difficulties in some sensory modalities that may be related to imagination difficulties. In this study, we tested the hypotheses that among Japanese university students in the general population, (1) higher autistic traits and (2) greater imagination difficulties are associated with lower performance in tasks requiring temporal integration. Two tasks were used to assess their temporal integration abilities: a speech-in-noise test using noise with temporal dips in the auditory modality and a slit-viewing task in the visual modality. The results showed that low performance in the speech-in-noise test was related to autistic traits and some aspects of imagination difficulties, whereas the slit-viewing task was related to neither autistic traits nor imagination difficulties. The ability to temporally integrate fragments of auditory information is expected to be associated with performance in perceiving speech in noise with temporal dips. The difficulties in perceiving sensory information as a single unified percept using priors may cause difficulties in temporally integrating auditory information and perceiving speech in noise. Furthermore, the structural equation modeling suggests that imagination difficulties are linked to difficulties in perceiving speech in noise with temporal dips, which links to social impairments.
- Research Article
- 10.3390/audiolres15050119
- Sep 19, 2025
- Audiology Research
- Konstantinos Drosos + 6 more
Background: Children diagnosed with Speech Sound Disorders (SSDs) encounter difficulties in speech perception, especially when listening in the presence of background noise. Recommended protocols for auditory processing evaluation include behavioral linguistic and speech processing tests, as well as objective electrophysiological measures. The present study compared the auditory processing profiles of children with SSD and typically developing (TD) children using a battery of behavioral language and auditory tests combined with auditory evoked responses. Methods: Forty parents of 7- to 10-year-old Greek Cypriot children completed questionnaires related to their children's listening; the children completed an assessment comprising language, phonology, and auditory processing measures, as well as auditory evoked responses. The experimental group included 24 children with a history of SSDs; the control group consisted of 16 TD children. Results: Three factors significantly differentiated SSD from TD children: Factor 1 (auditory processing screening), Factor 5 (phonological awareness), and Factor 13 (Auditory Brainstem Response [ABR] wave V latency). Among these, Factor 1 consistently predicted SSD classification both independently and in combined models, indicating strong ecological and diagnostic relevance. This predictive power suggests that real-world listening behaviors are central to SSD differentiation. The significant correlation between Factor 5 and Factor 13 may suggest an interaction between auditory processing at the brainstem level and higher-order phonological manipulation. Conclusions: This research underscores the diagnostic significance of integrating behavioral and physiological metrics through dimensional and predictive methodologies. Factor 1, which focuses on authentic listening environments, was identified as the strongest predictor. These results advocate for the inclusion of ecologically valid listening items in screening for auditory processing disorder (APD). Poor discrimination of speech in noise creates discrepancies between incoming auditory information and retained phonological representations, disrupting the implicit processing mechanisms that align auditory input with the phonological representations stored in memory. Speech and language pathologists can incorporate pertinent auditory processing assessment findings to identify potential language-processing challenges and formulate more effective therapeutic intervention strategies.
- Research Article
- 10.1177/23312165251375891
- Sep 5, 2025
- Trends in Hearing
- Maxime Perron + 2 more
Understanding speech in noise is a common challenge for older adults, often requiring increased listening effort that can deplete cognitive resources and impair higher-order functions. Hearing aids are the gold standard intervention for hearing loss, but cost and accessibility barriers have driven interest in alternatives such as Personal Sound Amplification Products (PSAPs). While PSAPs are not medical devices, they may help reduce listening effort in certain contexts, though supporting evidence remains limited. This study examined the short-term effects of bilateral PSAP use on listening effort using self-report measures and electroencephalography (EEG) recordings of alpha-band activity (8–12 Hz) in older adults with and without hearing loss. Twenty-five participants aged 60 to 87 years completed a hearing assessment and a phonological discrimination task under three signal-to-noise ratio (SNR) conditions during two counterbalanced sessions (unaided and aided). Results showed that PSAPs significantly reduced self-reported effort. Alpha activity in the left parietotemporal regions showed event-related desynchronization (ERD) during the task, reflecting brain engagement in response to speech in noise. In the unaided condition, alpha ERD weakened as SNR decreased, with activity approaching baseline. PSAP use moderated this effect, maintaining stronger ERD under the most challenging SNR condition. Reduced alpha ERD was associated with greater self-reported effort, suggesting neural and subjective measures reflect related dimensions of listening demand. These results suggest that even brief PSAP use can reduce perceived and neural markers of listening effort. While not a replacement for hearing aids, PSAPs may offer a means for easing cognitive load during effortful listening. ClinicalTrials.gov, NCT05076045, https://clinicaltrials.gov/study/NCT05076045
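Alpha-band event-related desynchronization (ERD) of the kind reported here is conventionally expressed as the percentage change in band power during the task relative to a pre-stimulus baseline, with negative values indicating desynchronization. A minimal single-channel sketch follows; the filter design, window choices, and electrode selection are assumptions, not the study's EEG pipeline:

```python
# Alpha-band (8-12 Hz) ERD as percent power change from a baseline window.
# Illustrative computation only; the study's actual pipeline may differ.
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_erd(eeg: np.ndarray, fs: float,
              baseline: tuple, task: tuple, band=(8.0, 12.0)) -> float:
    """ERD (%) for one channel of one epoch.

    eeg      : 1-D voltage trace for the epoch
    fs       : sampling rate in Hz
    baseline : (start_s, end_s) of the pre-stimulus window, relative to epoch onset
    task     : (start_s, end_s) of the task window, relative to epoch onset
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, eeg) ** 2                    # instantaneous alpha power

    def mean_power(window):
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return power[i0:i1].mean()

    p_base, p_task = mean_power(baseline), mean_power(task)
    return 100.0 * (p_task - p_base) / p_base           # negative = desynchronization
```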
- Research Article
- 10.1007/s00106-025-01666-5
- Sep 5, 2025
- HNO
- Susann Thyson + 4 more
Speech comprehension in a foreign language under noise conditions presents an increased cognitive demand. For multilingual patients with cochlear implants (PwCI), this poses a particular challenge, as audiological routine diagnostics are typically conducted in the language of the clinical environment. This study investigates speech understanding in noise as well as the subjectively perceived listening effort in PwCI compared to normal-hearing (NH) individuals under both native and nonnative language conditions. PwCI and NH completed the Oldenburg Sentence Test (OLSA) in both German and English. The SNR50 and the subjectively perceived mental effort, measured using the Rating Scale Mental Effort (RSME), were assessed. In addition, subjective language competence in English as a foreign language was assessed using the Common European Framework of Reference for Languages (CEFR). A total of 28 individuals with German as a first language and English as a foreign language (14 PwCI, 14 NH) were included. Among PwCI, the German version of the OLSA was significantly more intelligible than the English version (p = 0.010), whereas no significant difference was found for NH between language conditions. Listening effort was significantly higher during the English version of the OLSA in both PwCI (p = 0.003) and NH (p = 0.003). No correlation was found between self-assessed English language proficiency and perceived effort in either group. The significantly reduced performance of PwCI in their foreign language under noise conditions reflects the established finding that multilingual individuals experience greater difficulty understanding speech in noise. The additionally reduced automatization of linguistic processing, as well as a limited use of top-down listening strategies, that is, the use of prior knowledge, context, and expectations to fill gaps in the acoustic signal, makes understanding in the presence of background noise more difficult, which can lead to increased listening effort and more frequent comprehension gaps. These effects appear to be particularly pronounced in multilingual individuals. These results highlight the importance of individualized, linguistically and culturally sensitive approaches in the clinical management of PwCI.
- Research Article
- 10.1371/journal.pone.0331487
- Sep 4, 2025
- PLOS ONE
- Daniel Fogerty + 1 more
This study examined individual differences in how older adults with normal hearing (ONH) or hearing impairment (OHI) allocate auditory and cognitive resources during speech recognition in noise at equated recognition levels. Associations between predictor variables and speech recognition were assessed across three datasets that each included 15–16 conditions involving temporally filtered speech. These datasets involved (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) combined degraded spectral cues in speech-modulated noise. To minimize effects of audibility differences, speech was spectrally shaped according to each listener’s hearing thresholds. The extended Short-Time Objective Intelligibility metric was used to derive psychometric functions that relate the acoustic degradation to speech recognition. From these functions, speech recognition thresholds (SRTs) were determined at 20%, 50%, and 80% recognition. A multiple regression dominance analysis, conducted separately for the ONH and OHI groups, determined the relative importance of auditory and cognitive predictor variables to speech recognition. ONH participants had a stronger association of vocabulary knowledge with speech recognition, whereas OHI participants had a stronger association of speech glimpsing abilities with speech recognition. Combined with measures of working memory and hearing thresholds, these predictors accounted for 73% and 89% of the total variance for ONH and OHI, respectively, and generalized to other diverse measures of speech recognition.
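The threshold-derivation step described above (fitting a psychometric function that links an objective degradation metric to recognition, then reading off SRTs at fixed recognition levels) can be sketched as follows. The two-parameter logistic and the example data are illustrative assumptions, not the authors' exact fitting procedure:

```python
# Fit a logistic psychometric function to (degradation metric, proportion correct)
# pairs and invert it to obtain thresholds at 20%, 50%, and 80% recognition.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    """Proportion correct as a function of the degradation/SNR metric x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def fit_thresholds(x, prop_correct, levels=(0.2, 0.5, 0.8)):
    """Return the metric values (e.g., SNR in dB) at the requested recognition levels."""
    (midpoint, slope), _ = curve_fit(logistic, x, prop_correct, p0=[np.median(x), 1.0])
    # Invert the logistic: x = midpoint + ln(p / (1 - p)) / slope
    return {p: midpoint + np.log(p / (1.0 - p)) / slope for p in levels}

# Usage with made-up data (not from the study):
snr = np.array([-12, -9, -6, -3, 0, 3], dtype=float)
pc = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])
print(fit_thresholds(snr, pc))   # {0.2: ..., 0.5: ..., 0.8: ...}
```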
- Research Article
- 10.1044/2025_aja-25-00032
- Sep 2, 2025
- American Journal of Audiology
- Iyad Ghanim + 1 more
Sentences are encoded with semantic context, which facilitates listeners' ability to navigate background noise, or speech-in-noise (SIN), conditions. To examine how semantic context contributes to performance on one commonly used SIN test, the Quick Speech-in-Noise Test (QuickSIN) by Etymotic Research, Inc. (henceforth "QuickSIN"), we use a novel experimental paradigm that isolates semantic information. Ten college-aged monolingual participants with typical hearing listened to 72 sentences delivered at 0, 5, 10, 15, 20, or 25 dB SNR, followed by a choice between two visual words. One word was related to the overall sentence meaning, and the other word was unrelated. The reaction time (RT) to correctly select related targets was measured to index usage of semantic information. Participants' RTs to select a correct response were compared across different signal-to-noise ratios (SNRs). We found that less favorable noise conditions (0, +5 dB SNR) elicited greater usage of semantic information than more favorable noise conditions (20, 25 dB SNR). Transformed RT data were analyzed with nonparametric tests that assessed the homogeneity of variance within responses to each SNR condition. Results indicated that participants' RTs were consistently varied within each SNR condition, except for sentences at +20 dB SNR, indicating an imbalance in the degree of semantic context used in the sentences at that SNR level. Respondents to the QuickSIN use semantic context to facilitate processing, especially at less favorable SNR levels, which is consistent with research supporting a greater role of semantic information during suboptimal listening conditions. Differences in context use across noise conditions mean that test performance also reflects language processing and should be considered for updated tests of speech-in-noise performance. Critically, responses to sentences at the +20 dB SNR level used in the QuickSIN are so inconsistently varied in their degree of semantic usage as to prohibit a clinical interpretation alongside the other conditions. These findings warrant the development of a quick-to-administer SIN test with stimuli that are balanced for semantic expectancy to avoid language effects.
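The variance check described above (a nonparametric test of homogeneity of variance applied to transformed RTs within each SNR condition) could be run, for example, with the Fligner-Killeen test; the abstract does not name the exact test, so that choice and the data layout below are assumptions:

```python
# Nonparametric homogeneity-of-variance check of transformed RTs across the
# sentences within one SNR condition (Fligner-Killeen test as an assumed choice).
from scipy.stats import fligner

def sentence_variance_check(rts_per_sentence):
    """rts_per_sentence: one array of transformed RTs per sentence in a given SNR condition."""
    stat, p = fligner(*rts_per_sentence)
    return stat, p

# A significant result for a condition (as reported for +20 dB SNR) suggests the
# sentences in that condition are not balanced in how much semantic context they provide.
```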