Articles published on Comprehension Of Spoken Language
327 Search results
- Research Article
- 10.29303/jeef.v5i3.872
- Sep 29, 2025
- Journal of English Education Forum (JEEF)
- Martizha Fauzyah + 2 more
This research investigates the role of short-term memory (STM) in the comprehension of spoken language from a psycholinguistic perspective. Anchored in the theoretical frameworks of Alvarez & Cavanagh (2004), Norris (2017), and Jonides et al. (2008), the study explores how impairments in STM disrupt verbal processing, including the ability to retain, decode, and respond to linguistic input in real time. The analysis centers on Dory, a fictional character in Pixar’s Finding Dory (2016), who is depicted as experiencing persistent short-term memory loss. Employing a qualitative descriptive approach and narrative analysis, the study selected twenty scenes to examine manifestations of memory-related language breakdowns in naturalistic conversational contexts. The findings reveal consistent disruptions in Dory’s verbal interactions, particularly in turn-taking, following instructions, and interpreting social cues—phenomena that align with contemporary models of STM as a distinct cognitive system from long-term memory. While emotionally salient information is occasionally retained, the character’s inability to maintain immediate verbal context leads to confusion and emotional distress. These results underscore both the linguistic and psychosocial consequences of STM deficits. By integrating psycholinguistic theory with narrative media, the study provides accessible insight into cognitive-linguistic disorders. Future research should explore real-world populations to substantiate these findings and inform educational or clinical interventions.
- Research Article
- 10.1080/0361073x.2025.2553363
- Sep 7, 2025
- Experimental Aging Research
- Takashi Fujita + 3 more
ABSTRACT Purpose Patients with Alzheimer’s disease (AD) lose the ability to manage their medications as the disease progresses. Several methods have been used to administer medication to patients at home using Internet of Things (IoT) devices for rehabilitation, but no studies have yet been published investigating the factors that influence the success or failure of this approach in older adults and patients with AD. Therefore, this study aimed to investigate differences in medication-related behaviors and their influencing factors in older adults, both with and without AD, using IoT. Materials and methods The study population consisted of 57 patients in the AD group and 34 older adults in the non-AD group. The AD group consisted mainly of patients with mild disease. Both groups conducted a medication management experiment using medication management applications delivered through either “Arata” or “Skype”, and their behaviors and influencing factors were examined. Results Operational errors were observed in both groups. Influencing factors that were common to both “Arata” and “Skype” were comprehension of spoken language and prospective memory. The influencing factors that differed were disorientation and attention.
- Research Article
- 10.1186/s13229-025-00674-0
- Aug 4, 2025
- Molecular autism
- Zihui Hua + 4 more
Language difficulties are common in autism, with several theoretical perspectives proposing that difficulties in forming and updating predictions may underlie the cognitive profile of autism. However, research examining prediction in the language domain among autistic children remains limited, with inconsistent findings regarding prediction efficiency and insufficient investigation of how autistic children incrementally integrate multiple semantic elements during language processing. This study addresses these gaps by investigating both prediction efficiency and incremental processing strategy during spoken language comprehension in autistic children compared to neurotypical peers. Using the visual world paradigm, we compared 45 autistic children (3-8 years) with 52 age-, gender-, and verbal IQ-matched neurotypical children. Participants viewed arrays containing a target object and three semantically controlled distractors (agent-related, action-related, and unrelated) while listening to subject-verb-object structured sentences. Eye movements were recorded to analyze fixation proportions. We employed cluster-based permutation analysis to identify periods of sustained biased looking, growth curve analysis to compare fixation trajectories, and divergence point analysis to determine the onset timing of predictive looking. Both groups demonstrated predictions during spoken language comprehension and employed similar incremental processing strategies, showing increased fixations to both target objects and action-related distractors after verb onset despite the latter's incompatibility with the agent. However, autistic children exhibited reduced prediction efficiency compared to neurotypical peers, evidenced by significantly lower proportions of and slower growth rates in fixations to target objects relative to unrelated distractors, and delayed onset of predictive looking. 
Reduced prediction efficiency was associated with higher levels of autism symptom severity in the autistic group and increased autistic traits across both groups, with autism-related communication difficulties showing the most robust associations. Our sample included only autistic children without language impairments, limiting generalizability to the broader autism spectrum. The task employed only simple sentence structures in controlled experimental settings, which may not fully capture language processing patterns in naturalistic communication contexts. While autistic children employ similar incremental processing strategies to neurotypical peers during language comprehension, they demonstrate reduced prediction efficiency. Autism symptom severity and autistic traits varied systematically with prediction efficiency, with autism-related communication difficulties showing the strongest associations. These findings enhance our understanding of language processing mechanisms in autism and suggest that interventions targeting language development might benefit from addressing prediction efficiency, such as providing additional processing time and gradually increasing the complexity of semantic integration tasks.
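The divergence point analysis mentioned in the abstract above can be illustrated with a minimal sketch: given gaze samples binned over time, compute per-bin fixation proportions and find the first bin that begins a sustained target bias. Everything here (function names, bin structure, toy data) is hypothetical and far simpler than the bootstrapped analyses such studies use:

```python
# Hypothetical sketch: fixation proportions and a simple divergence point.
# Real visual-world analyses use bootstrapped divergence point analysis and
# cluster-based permutation tests; this only illustrates the core idea.

def fixation_proportions(samples, roi):
    """Proportion of gaze samples on `roi` in each time bin.

    `samples` maps time-bin index -> list of ROI labels (one per trial).
    """
    return {t: labels.count(roi) / len(labels) for t, labels in samples.items()}

def divergence_point(target_props, distractor_props, run=3):
    """First time bin starting a streak of `run` consecutive bins with
    target proportion strictly above the distractor proportion."""
    bins = sorted(target_props)
    streak = 0
    for i, t in enumerate(bins):
        streak = streak + 1 if target_props[t] > distractor_props[t] else 0
        if streak == run:
            return bins[i - run + 1]
    return None

# Toy data: 6 time bins, 4 trials each; "T" = target, "U" = unrelated.
samples = {
    0: ["U", "U", "T", "U"],
    1: ["U", "T", "U", "U"],
    2: ["T", "T", "U", "U"],
    3: ["T", "T", "T", "U"],
    4: ["T", "T", "T", "T"],
    5: ["T", "T", "T", "U"],
}
target = fixation_proportions(samples, "T")
unrelated = fixation_proportions(samples, "U")
onset = divergence_point(target, unrelated, run=3)  # earliest sustained target bias
```

The consecutive-bin criterion only conveys the intuition of "sustained biased looking"; published analyses establish the onset with bootstrap resampling and correct for multiple comparisons across bins.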
- Research Article
- 10.1101/2025.07.05.663298
- Jul 6, 2025
- bioRxiv : the preprint server for biology
- Eleonora J Beier + 5 more
The rapid, continuous flow of spoken language places strong demands on attention, and it is thought that listeners meet these demands by predicting when important information will occur and allocating attention accordingly. However, to date there is little direct evidence for the involvement of preparatory attention during language processing. In this study, we investigate preparatory attention during spoken language comprehension by measuring alpha neural activity with EEG, a known measure of temporal attentional preparation. Alpha activity leading up to target words that were either focused or defocused by a preceding discourse question did not vary as a function of focus, challenging the assumption that attention is pre-allocated to the timing of focused words. On the other hand, we found that trial-by-trial fluctuations in alpha activity predicted both the depth of processing and the subsequent memory for new information. Specifically, pre-target alpha modulated a centro-parietal Dm subsequent memory effect for focused words, linking preparatory attention to memory encoding during comprehension. Together, these findings bridge psycholinguistic studies on information structure and cognitive neuroscience research on temporal attention, offering novel insights into the role of alpha activity in attentional dynamics during spoken language processing.
- Research Article
- 10.1016/j.rlfa.2025.100524
- Jul 1, 2025
- Revista de Logopedia, Foniatría y Audiología
- Johanna Josephina Geytenbeek
Assessment of spoken language comprehension in persons with complex communication needs
- Research Article
- 10.3390/languages10070161
- Jun 28, 2025
- Languages
- Pumpki Lei Su + 3 more
Although prosodic differences in autistic individuals have been widely documented, little is known about their ability to perceive and interpret specific prosodic features, such as contrastive pitch accent—a prosodic signal that places emphasis and helps listeners distinguish between competing referents in discourse. This study addresses that gap by investigating the extent to which autistic children can (1) perceive contrastive pitch accent (i.e., discriminate contrastive pitch accent differences in speech) and (2) interpret contrastive pitch accent (i.e., use prosodic cues to guide real-time language comprehension), and (3) whether their ability to interpret contrastive pitch accent is associated with broader language and social communication skills, including receptive prosody, pragmatic language, social communication, and autism severity. Twenty-four autistic children and 24 neurotypical children aged 8 to 14 completed an AX same–different task and a visual-world paradigm task to assess their ability to perceive and interpret contrastive pitch accent. Autistic children demonstrated the ability to perceive and interpret contrastive pitch accent, as evidenced by comparable discrimination ability to neurotypical peers on the AX task and real-time revision of visual attention based on prosodic cues in the visual-world paradigm. However, autistic children showed significantly slower reaction time during the AX task, and a subgroup of autistic children with language impairment showed significantly slower processing of contrastive pitch accent during the visual-world paradigm task. Additionally, speed of contrastive pitch accent processing was significantly associated with pragmatic language skills and autism symptom severity in autistic children. 
Overall, these findings suggest that while autistic children as a group are able to discriminate prosodic forms and interpret the pragmatic function of contrastive pitch accent during spoken language comprehension, differences in prosody processing in autistic children may be reflected not in accuracy, but in speed of processing measures and in specific subgroups defined by language ability.
- Research Article
- 10.1097/aud.0000000000001646
- Jun 16, 2025
- Ear and hearing
- Margaret Cychosz + 8 more
Cochlear implants are the most effective means to provide access to spoken language models for children with severe to profound deafness. In typical development, spoken language emerges gradually as children vocally explore and interact with caregivers. But it is unclear how early vocal activity unfolds after children gain access to auditory signals, and thus spoken language, via cochlear implants, and how this early vocal exploration predicts children's spoken language development. This longitudinal study investigated how two formative aspects of early language (child speech productivity and caregiver-child vocal interactions) develop following cochlear implantation, and how these aspects impact children's spoken language outcomes. Data were collected via small wearable recorders that measured caregiver-child communication in the home pre- and for up to 3 years post-implantation (N = 25 children, average = 167 hours/child, 4,180 total hours of observation over an average of 11 unique days/child). Spoken language outcomes were measured using the Preschool Language Scales-5. Growth trajectories were compared with a normative sample of children with typical hearing (N = 329). Even before implantation, all children vocalized and vocally interacted with caregivers. Following implantation, child speech productivity (β = 9.67, p < 0.001) and caregiver-child vocal interactions (β = 12.65, p < 0.001) increased significantly faster for children with implants than younger, hearing age-matched typical hearing controls, with the fastest growth occurring in the time following implant activation. There were significant, positive effects of caregiver-child interaction on children's receptive, but not expressive, spoken language outcomes. 
Overall, children who receive cochlear implants experience robust growth in speech production and vocal interaction (crucial components underlying spoken language), and they follow a similar, albeit faster, developmental timeline as children with typical hearing. Regular vocal interaction with caregivers in the first 1 to 2 years post-implantation reliably predicts children's comprehension of spoken language above and beyond known predictors such as age at implantation.
- Research Article
- 10.1080/23273798.2025.2489611
- Apr 17, 2025
- Language, Cognition and Neuroscience
- Zuzanna Fuchs
ABSTRACT This study investigates the processing of portmanteau morphemes that index more than one noun-category feature – here, grammatical gender and animacy. In Polish, in the accusative singular, animate and inanimate masculine nouns differ in the agreement morphology that they determine on modifying adjectives. This study investigates the spoken-language comprehension of these agreement morphemes using a variant of the Visual World Paradigm that incorporates the Covered Box Paradigm. Results suggest that, after a pre-nominal adjective inflected for masculine animate agreement, looks to the target are reduced in the presence of a (partially matching) feminine animate or masculine inanimate competitor. Analogously, looks to a masculine inanimate target following corresponding portmanteau agreement morphology are reduced in the presence of partially matching competitors. These findings suggest that, during spoken-language comprehension, processing a portmanteau morpheme activates the two noun-category features independently rather than as a single complex noun-class feature.
- Research Article
- 10.1111/1460-6984.70025
- Mar 1, 2025
- International journal of language & communication disorders
- Lindsay Pennington + 10 more
Current UK measures of early spoken language comprehension require manipulation of toys and/or verbal responses and are not accessible to children with severe motor impairments. The Computer-Based Instrument for Low motor Language Testing (C-BiLLT) (originally validated in Dutch) is a computerized test of spoken language comprehension that children with motor disorders control using their usual response methods. To create a UK version of the C-BiLLT, evaluate its validity and reliability, and assess its practicability for children with motor disorders. The C-BiLLT was translated into British English and items were adapted to ensure familiarity to UK children. A total of 424 children (233 females, 191 males) aged 1:6-7:5 (years:months) without developmental disabilities were recruited from North East England. Children completed the UK C-BiLLT and Preschool Language Scales 5 (PLS-5) for convergent validity evaluation and either the visual reception subtest of the Mullen Scales of Early Learning (MSEL) (children aged 1:8-5:5) or Raven's Coloured Progressive Matrices (CPM) (ages 5:6-7:5) to assess divergent validity. A total of 33 children completed the UK C-BiLLT within 4 weeks of initial assessment for test-retest reliability assessment (intraclass correlation coefficient, ICC). Internal consistency was assessed using Cronbach's alpha, and exploratory factor analysis examined structural validity. A total of 24 children (10 female, 14 male; aged 4-12 years) with non-progressive motor disorders who use augmentative and alternative communication (AAC) rated the UK C-BiLLT's ease of use and completed the British Picture Vocabulary Scales (BPVS) and CPM for convergent and divergent validity testing. Internal consistency was high for children without motor disorders (α = 0.96). Exploratory factor analysis extracted two factors, together explaining 68% of the total variance. Test-retest reliability was excellent (ICC = 0.95; 95% confidence interval, CI: 0.90-0.98). 
UK C-BiLLT scores correlated highly with PLS-5 (r = 0.91) and MSEL (r = 0.81), and moderately with CPM (r = 0.41), and increased across full-year age-bands (F(6, 407) = 341.76, p < 0.001, η2 = 0.83). A total of 19 children with motor disorders rated the UK C-BiLLT as easy/ok to use; two judged it hard; three declined to rate the ease of use. Their UK C-BiLLT scores correlated highly with BPVS (r = 0.77) and moderately with CPM (r = 0.57). The UK C-BiLLT is a valid, reliable measure of early spoken language development and is potentially practicable for children with motor disorders. It may facilitate international research on the language development of children with motor disorders and evaluation of intervention at the national level. What is already known on the subject: Young children with motor disorders have difficulties accessing standardized assessments of language comprehension that require children to handle objects or to speak a response. What this paper adds to the existing knowledge: This study demonstrates the validity and reliability of a UK translation of the C-BiLLT and suggests that the measure is feasible for children with motor disorders who use AAC and have a reliable method of response via computer access. What are the potential or clinical implications of this work? The UK C-BiLLT is a useful addition to the limited tools currently available to assess early spoken language comprehension of children with motor disorders.
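As a side note on the internal-consistency statistic reported in the abstract above, Cronbach's alpha can be computed from per-item scores as k/(k-1) × (1 − Σ item variances / variance of total scores). A minimal sketch with made-up scores (not data from the study):

```python
# Hypothetical sketch of Cronbach's alpha, the internal-consistency
# statistic reported above. The scores below are toy data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """`item_scores`: one inner list per item, aligned across respondents."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Three items scored by four respondents (respondents vary in ability,
# so the items correlate positively).
items = [
    [2, 4, 3, 5],
    [3, 5, 4, 5],
    [2, 4, 3, 4],
]
alpha = cronbach_alpha(items)  # high alpha: items rank respondents consistently
```

Values near the α = 0.96 reported above indicate that the test items rank respondents very consistently; negative or near-zero values would signal uncorrelated or miskeyed items.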
- Research Article
- 10.1037/xge0001677
- Mar 1, 2025
- Journal of experimental psychology. General
- Anthony Yacovone + 3 more
It is well-established that people make predictions during language comprehension; the nature and specificity of these predictions, however, remain unclear. For example, do comprehenders routinely make predictions about which words (and phonological forms) might come next in a conversation, or do they simply make broad predictions about the gist of the unfolding context? Prior EEG studies using tightly controlled experimental designs have shown that form-based prediction can occur during comprehension, as N400s to unexpected words are reduced when they resemble the form of a predicted word (e.g., ceke when expecting cake). One limitation, however, is that these studies often create environments that are optimal for eliciting form-based prediction (e.g., highly constraining sentences, slower-than-natural rates of presentation). Thus, questions remain about whether form-based prediction can occur in settings that more closely resemble everyday comprehension. To address this, the present study explores form-based prediction during naturalistic spoken language comprehension. English-speaking adults listened to a story in which some of the words had been altered. Specifically, we experimentally manipulated whether participants heard the original word from the story (cake), a form-similar nonword (ceke), or a less-similar nonword (vake). Half of the target words were predictable given their context, and the other half were unpredictable. Consistent with the prior work, we found reduced N400s for form-similar nonwords (ceke) relative to less-similar nonwords (vake), but only in predictable contexts. This study demonstrates that form-based prediction can emerge in naturalistic contexts, and therefore, it is likely to be a common aspect of language comprehension in the wild.
- Research Article
- 10.1093/cercor/bhae479
- Dec 3, 2024
- Cerebral cortex (New York, N.Y. : 1991)
- Chantal Oderbolz + 3 more
Models of phonology posit a hierarchy of prosodic units that is relatively independent from syntactic structure, requiring its own parsing. It remains unexplored how this prosodic hierarchy is represented in the brain. We investigated this foundational question by means of an electroencephalography (EEG) study. Thirty young adults listened to German sentences containing manipulations at different levels of the prosodic hierarchy. Evaluating speech-to-brain cortical entrainment and phase-amplitude coupling revealed that prosody's hierarchical structure is maintained at the neural level during spoken language comprehension. The faithfulness of this tracking varied as a function of the hierarchy's degree of intactness as well as systematic interindividual differences in audio-motor synchronization abilities. The results underscore the role of complex oscillatory mechanisms in configuring the continuous and hierarchical nature of the speech signal and situate prosody as a structure indispensable from theoretical perspectives on spoken language comprehension in the brain.
- Research Article
- 10.1177/10538135241291364
- Nov 22, 2024
- NeuroRehabilitation: An International, Interdisciplinary Journal
- Nikita M Subudhi + 3 more
Background Acquired neurogenic communication disorders are among the most commonly observed consequences of neurological disorders. Profiling communication characteristics is a sensitive indicator in the tertiary centre that helps individualise management strategies to improve the quality of life of individuals with neurogenic communication disorders. Objective The research aimed to develop and validate a Comprehensive Level-based Framework for Neurogenic communication disorders (CLFN) by profiling the communication characteristics of individuals with acquired neurogenic communication disorders in a tertiary care centre. Methods The research followed a cross-sectional design and used a convenience sampling process for sample collection. A total of 76 participants were recruited based on the selection criteria. The initial administration of the CLFN was documented as pre-levels for each domain for all participants. The CLFN was re-administered after 10 sessions of intervention over 7 days, and the results were documented as post-levels. Results A greater proportion of participants were from the middle-aged group than from the older age group, and neurogenic communication disorders occurred more frequently in males than in females. Pairwise comparison between pre-levels and post-levels was statistically significant for the speech intelligibility, cognitive-communication orientation, cognitive-communication memory, cognitive-communication executive function, communication, spoken language expression, spoken language comprehension, repetition, naming, and writing domains. Conclusions The CLFN can serve as a reference for the holistic assessment of individuals with neurogenic communication disorders in a tertiary care centre. 
It will support progress monitoring and intervention planning before any significant neurogenic impairment manifests, which would improve individuals’ quality of life.
- Research Article
- 10.1038/s41467-024-53128-1
- Oct 14, 2024
- Nature Communications
- Hugo Weissbart + 1 more
Humans excel at extracting structurally-determined meaning from speech despite inherent physical variability. This study explores the brain’s ability to predict and understand spoken language robustly. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors of forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structured and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
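The phase-amplitude coupling central to the abstract above can be illustrated with a toy sketch: a coupling measure such as the mean vector length (one common modulation index) is large when high-frequency amplitude depends systematically on low-frequency phase. The phase and amplitude series below are synthesized directly, and all names are hypothetical; real analyses extract phase and amplitude from MEG/EEG via band-pass filtering and the Hilbert transform:

```python
import cmath
import math

# Hypothetical sketch of phase-amplitude coupling via the mean vector
# length (a common modulation index). Phase and amplitude series are
# synthesized here rather than extracted from neural recordings.

def modulation_index(phases, amps):
    """Length of the amplitude-weighted mean phase vector, in [0, 1]."""
    total = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amps))
    return abs(total) / sum(amps)

n = 1000
# Slow-oscillation phase: 10 full cycles sampled on a uniform grid.
phases = [2 * math.pi * (i % 100) / 100 for i in range(n)]
# Coupled case: high-frequency amplitude peaks at slow-phase zero.
coupled = [1 + math.cos(p) for p in phases]
# Uncoupled case: amplitude is flat across all phases.
uncoupled = [1.0] * n

mi_coupled = modulation_index(phases, coupled)
mi_uncoupled = modulation_index(phases, uncoupled)
# coupling concentrates amplitude at one phase, so mi_coupled >> mi_uncoupled
```

A flat amplitude profile cancels around the unit circle (index near zero), while phase-locked amplitude leaves a large resultant vector; surrogate or permutation statistics are then needed to decide whether an observed index exceeds chance.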
- Research Article
- 10.1162/nol_a_00126
- Aug 15, 2024
- Neurobiology of language (Cambridge, Mass.)
- Hannah Mechtenberg + 3 more
Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions including receptive language processing. In the current fMRI study, we show evidence for the cerebellum's sensitivity to variation in two well-studied psycholinguistic properties of words (lexical frequency and phonological neighborhood density) during passive, continuous listening of a podcast. To determine whether, and how, activity in the cerebellum correlates with these lexical properties, we modeled each word separately using an amplitude-modulated regressor, time-locked to the onset of each word. At the group level, significant effects of both lexical properties landed in expected cerebellar subregions: Crus I and Crus II. The BOLD signal correlated with variation in each lexical property, consistent with both language-specific and domain-general mechanisms. Activation patterns at the individual level also showed that effects of phonological neighborhood and lexical frequency landed in Crus I and Crus II as the most probable sites, though there was activation seen in other lobules (especially for frequency). Although the exact cerebellar mechanisms used during speech and language processing are not yet evident, these findings highlight the cerebellum's role in word-level processing during continuous listening.
- Research Article
- 10.1163/15507076-bja10028
- Jul 19, 2024
- Heritage Language Journal
- Zuzanna Fuchs + 1 more
Abstract This study investigates facilitative processing of grammatical gender in heritage Spanish speakers whose dominant language is German, using eye-tracking in the Visual World Paradigm. Bilinguals with two gender systems are known to have an integrated mental lexicon with shared gender features that are co-activated during language processing and can result in interference. The present study shows that, despite observed effects of gender congruency with German, heritage speakers were able to use gender information on prenominal articles in Spanish to facilitate lexical retrieval of the target noun. This suggests that processing of gender agreement in the heritage language is resilient to competition from gender in the dominant language during real-time spoken-language comprehension. Moreover, direct comparison with previous results from heritage Spanish speakers in the USA does not show evidence that overall speed of facilitative processing in the heritage language is modulated by the presence or absence of gender in the majority language.
- Research Article
- 10.4103/jisha.jisha_36_24
- Jul 1, 2024
- Journal of Indian Speech Language & Hearing Association
- Nidhi M Desai + 1 more
Abstract Introduction: Speech perception in children provides the basis for discrimination and comprehension of spoken language. Clinicians utilize different speech perception tests depending on the child’s age. The early speech perception (ESP) test is a commonly used assessment tool in young children that evaluates the child’s ability to discriminate and identify words from a closed set of pictures using auditory cues. Methods: A cross-sectional study with a purposive sampling method was undertaken to develop and administer a Gujarati ESP. The study was conducted in two phases. Phase one included the development of the test material and test-retest reliability analysis. In the second phase, the developed test was administered to 113 typically developing children aged between 3 and 6 years. Two lists, each consisting of 12 words per subtest of the ESP, were administered to all the children. Results: The developed ESP test had good-to-excellent test–retest reliability for both lists. The developed lists exhibit nonequivalence and therefore cannot be employed interchangeably. Results revealed statistically significant differences in the scores for all subtests between typically developing children across age groups. However, no significant difference in scores was noted between male and female participants for either list. Conclusions: The ESP test developed in Gujarati is suitable for administration to typically developing children aged 3–6 years.
- Research Article
- 10.32996/ijels.2024.6.2.17
- Jun 4, 2024
- International Journal of English Language Studies
- Dinh Cong Tinh + 2 more
Active listening is needed to navigate many of the circumstances that arise in everyday life. People listen to audio for a variety of reasons, including entertainment, learning in scholarly fields, and gathering important information. Students with a wide variety of hearing problems often struggle to comprehend content delivered in English, and children often find it difficult to absorb spoken information because schools place a strong focus on language, reading, and writing. Most course guides and lecturers tend to downplay the importance of listening. The challenges of auditory perception, particularly those pertaining to hearing, actively processing, and comprehending spoken information, are the main focus of this research. Teachers who are aware of the difficulties that their pupils encounter in the classroom can support them more effectively, helping them improve their comprehension of spoken language and acquire excellent listening skills, because such awareness improves teachers' ability to relate to and understand the emotions and experiences of their students. This article review emphasizes the significance of helping students develop efficient study habits and improve their English listening abilities; students who are struggling in other courses could also get help from teachers who specialize in teaching English as a second language. The review concludes with suggested instructional exercises for both instructors and students.
- Research Article
- 10.1162/jocn_a_02163
- Apr 22, 2024
- Journal of cognitive neuroscience
- Rong Ding + 2 more
Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as the phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1-3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those which occurred during memory encoding. Integrating these two research lines, we here tested the hypothesis that the neural dynamic patterns, especially in the delta frequency range, underlying referential meaning representation would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography data set acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
- Research Article
- 10.31577/sp.2024.01.889
- Mar 20, 2024
- Studia Psychologica
- Haris Memisevic + 2 more
Correlation of Cognitive and Linguistic Factors with Spoken Language Comprehension in Early Elementary Students
- Research Article
- 10.1121/10.0026868
- Mar 1, 2024
- The Journal of the Acoustical Society of America
- Scarlet Wan Yee Li + 2 more
Recent work examining cue integration across levels of linguistic representation has found that listeners can dynamically integrate some lower-level and higher-level cues during spoken language comprehension. However, it is still not well understood how the mechanism of cue integration works. This study investigated how adults (n = 52) process preceding higher-level semantic cues and later low-level coarticulation cues during spoken language comprehension using an eye-tracking paradigm. Participants were tested on sentences that contained a prime (semantically related or semantically unrelated to the target) and a target which had varying coarticulation cues (matching versus mismatching splicing cues). Participants were presented with two pictures (target and competitor) on a screen. Analyses looked at the proportion of looking to the target during the prime and target time windows. Results demonstrate that adults flexibly use both the preceding semantic cues and the later coarticulatory cues once they are available. Our findings also indicate that adults flexibly weighed both the preceding higher-level and later lower-level cues, such that the processing of low-level coarticulatory cues varied depending on the semantic context. We have added a previously unstudied level of cue (semantic context) to the set of cues that our cognitive system can integrate during language comprehension.