Time-efficient Methodology for Robustly Assessing Speech-related Mismatch Responses in Adults and Infants.
The mismatch response (MMR) is a critical neural index of speech-contrast discrimination. Previous research has demonstrated that language experience shapes the MMR, such that responses to native speech contrasts differ from those to nonnative contrasts. This effect is observed as early as 11-12 months, but not at 6-7 months of age, indicating early learning of speech sounds. Yet many challenges remain in using the MMR to advance our understanding of speech learning, especially in infants, including prolonged recording times, inefficient use of data, and a lack of reconciliation between MMRs recorded with different technologies (i.e., EEG vs. magnetoencephalography [MEG]). Using an improved recording paradigm and analysis approach, the current study addressed these challenges by examining (1) whether the MEG-MMR is linked to the well-established EEG-MMR in the same adults and (2) whether our methods capture the difference in the MEG-MMR between native and nonnative speech contrasts in adults and (3) in older infants. Results from 18 adults with simultaneous M/EEG demonstrated a high correlation between the MEG-MMR and the EEG-MMR. Additionally, MEG-MMRs to native speech contrasts differed from those to nonnative contrasts, replicating spatiotemporal patterns documented in the existing literature. Finally, we replicated this effect in the MEG-MMR of 14 infants aged 9 to 14 months using the same methods. These findings validate our new methodologies (less than 15 min) for acquiring and analyzing speech-related MMRs across ages, paving the way for studying early language development and improving early detection of language-related disorders.
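The deviant-minus-standard difference wave underlying MMR measures like these, and the across-modality correlation reported above, can be sketched in a few lines. The following is a minimal illustration on simulated data, not the study's pipeline; the trial counts, deflection waveform, and noise levels are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial epochs (trials x time samples); in a real study
# these would be segmented M/EEG recordings time-locked to stimulus onset.
n_time = 200
t = np.linspace(0.0, 0.4, n_time)  # 0-400 ms post-onset
# Hypothetical MMR deflection peaking near 150 ms, carried only by deviants.
mmr_shape = 0.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))

standard = rng.normal(size=(300, n_time))            # frequent stimulus
deviant = rng.normal(size=(60, n_time)) + mmr_shape  # rare stimulus

# MMR as the deviant-minus-standard difference of the trial averages.
mmr = deviant.mean(axis=0) - standard.mean(axis=0)

# Comparing MMRs from two measurements (e.g., MEG- vs. EEG-derived)
# with a Pearson correlation, as in the adult validation above.
mmr_other = mmr + 0.05 * rng.normal(size=n_time)  # stand-in second measurement
r = np.corrcoef(mmr, mmr_other)[0, 1]
```

With enough trials the averaging suppresses the background activity, the difference wave recovers the deflection present only in deviant trials, and the two noisy copies of the same difference wave correlate highly.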
- Research Article
- 10.1016/j.clinph.2018.02.132
- Mar 16, 2018
- Clinical Neurophysiology
The relationship between mismatch response and the acoustic change complex in normal hearing infants
- Research Article
- 10.1097/aud.0000000000000726
- Jan 1, 2019
- Ear & Hearing
To examine maturation of neural discriminative responses to an English vowel contrast from infancy to 4 years of age and to determine how biological factors (age and sex) and an experiential factor (amount of Spanish versus English input) modulate neural discrimination of speech. Event-related potential (ERP) mismatch responses (MMRs) were used as indices of discrimination of the American English vowels [ε] versus [I] in infants and children between 3 months and 47 months of age. A total of 168 longitudinal and cross-sectional data sets were collected from 98 children (Bilingual Spanish-English: 47 male and 31 female sessions; Monolingual English: 48 male and 42 female sessions). Language exposure and other language measures were collected. ERP responses were examined in an early time window (160 to 360 msec, early MMR [eMMR]) and late time window (400 to 600 msec, late MMR). The eMMR became more negative with increasing age. Language experience and sex also influenced the amplitude of the eMMR. Specifically, bilingual children, especially bilingual females, showed more negative eMMR compared with monolingual children and with males. However, the subset of bilingual children with more exposure to English than Spanish compared with those with more exposure to Spanish than English (as reported by caretakers) showed similar amplitude of the eMMR to their monolingual peers. Age was the only factor that influenced the amplitude of the late MMR. More negative late MMR was observed in older children with no difference found between bilingual and monolingual groups. Consistent with previous studies, our findings revealed that biological factors (age and sex) and language experience modulated the amplitude of the eMMR in young children. The early negative MMR is likely to be the mismatch negativity found in older children and adults. 
In contrast, the late MMR amplitude was influenced only by age and may be equivalent to the Nc in infants and to the late negativity observed in some auditory passive oddball designs.
- Research Article
- 10.1016/j.dcn.2022.101127
- Jun 22, 2022
- Developmental cognitive neuroscience
Longitudinal trajectories of electrophysiological mismatch responses in infant speech discrimination differ across speech features
- Research Article
- 10.3389/fnins.2021.686027
- Sep 3, 2021
- Frontiers in Neuroscience
Preterm birth carries a risk for adverse neurodevelopment. Cognitive dysfunctions, such as language disorders, may manifest as atypical sound discrimination already in early infancy. As infant-directed singing has been shown to enhance language acquisition in infants, we examined whether parental singing during skin-to-skin care (kangaroo care) improves speech sound discrimination in preterm infants. Forty-five preterm infants born between 26 and 33 gestational weeks (GW) and their parents participated in this cluster-randomized controlled trial (ClinicalTrials ID IRB00003181SK). In both groups, parents conducted kangaroo care during 33–40 GW. In the singing intervention group (n = 24), a certified music therapist guided parents to sing or hum during daily kangaroo care. In the control group (n = 21), parents conducted standard kangaroo care and were not instructed to use their voices. Parents in both groups reported the duration of daily intervention. Auditory event-related potentials were recorded with electroencephalogram at term age using a multi-feature paradigm consisting of phonetic and emotional speech sound changes and a one-deviant oddball paradigm with pure tones. In the multi-feature paradigm, prominent mismatch responses (MMRs) were elicited to the emotional sounds and to many of the phonetic deviants in the singing intervention group, and to some of the emotional and phonetic deviants in the control group. A group difference was found in that the MMRs were larger in the singing intervention group, mainly because larger MMRs were elicited to the emotional sounds, especially in females. The overall duration of the singing intervention (range 15–63 days) was positively associated with the MMR amplitudes for both phonetic and emotional stimuli in both sexes, unlike the daily singing time (range 8–120 min/day).
In the oddball paradigm, MMRs to the non-speech sounds were elicited in both groups, and no group differences or associations between singing time and response amplitudes were found. These results imply that repeated parental singing during kangaroo care improved auditory discrimination of phonetic and emotional speech sounds in preterm infants at term age. Regular singing routines can be recommended for parents to promote the development of the auditory system and auditory processing of speech sounds in preterm infants.
- Research Article
- 10.1121/1.4708064
- Apr 1, 2012
- The Journal of the Acoustical Society of America
We investigated the relation between language exposure and neural commitment to the phonetic units of language in 11- to 14-month-old English monolingual (N=22) and English-Spanish bilingual infants (N=22). Our previous work suggested that bilingual infants develop phonetic neural commitment at a different pace than their monolingual peers (Garcia-Sierra et al., 2011). However, interpretation of the bilingual data requires testing a speech contrast that is non-native for both bilinguals and monolinguals. We assessed language exposure using LENA digital recorders. Neural speech discrimination (English, Spanish, Mandarin) was tested using event-related potentials (ERPs) to determine the Mismatch Response (MMR). Both groups showed significant correlations between MMRs and language exposure. However, monolinguals showed negative MMRs and negative correlations between MMR and exposure; bilinguals showed positive MMRs and positive correlations with exposure. Negative MMRs are interpreted as an established commitment to native speech sounds. Positive MMRs are interpreted as an initial ability to discriminate sounds. No correlations were found between Mandarin-MMRs and language exposure. Another phonetic contrast (Hindi), nonnative for both groups, is now being tested in the monolingual and bilingual children. Our results support the view that bilingual and monolingual infants show a different pattern of speech perception development.
- Research Article
- 10.1016/j.neulet.2012.07.064
- Aug 9, 2012
- Neuroscience Letters
Neural mismatch indices of vowel discrimination in monolingually and bilingually exposed infants: Does attention matter?
- Research Article
- 10.1121/10.0015536
- Oct 1, 2022
- The Journal of the Acoustical Society of America
The mismatch response (MMR) is a common neural signature, measurable with both MEG and EEG, for evaluating neural sensitivity to sound change. The complex auditory brainstem response (cABR) has also gained wide research interest recently, as it is argued to reflect early sensory encoding of complex sounds, such as speech, along the auditory pathway. While both measures are important in infants, who undergo rapid speech learning, they share the crucial drawback of requiring many trials and thus long recording times, prohibiting wide usage in infant populations. Here, we investigate a new, more efficient recording paradigm to simultaneously assess both the MMR and the cABR for speech in MEG. Adult participants are recorded under this new paradigm using simultaneous M/EEG with previously published speech stimuli. For the MMR, we aim to replicate previously published results showing that the MMR for a native speech contrast is more concentrated than for a nonnative speech contrast. For the cABR, we aim to extract a predominant spatiotemporal pattern from all MEG channels and examine its correlation with the EEG-recorded signal. Once the new paradigm is validated in adults, it can be used in infant populations with much greater efficiency, opening the door to new research questions.
- Research Article
- 10.1044/2024_jslhr-23-00820
- Jan 2, 2025
- Journal of speech, language, and hearing research : JSLHR
This study aimed to investigate infants' neural responses to changes in emotional prosody in spoken words. The focus was on understanding developmental changes and potential sex differences, aspects that were not consistently observed in previous behavioral studies. A modified multifeature oddball paradigm was used with emotional deviants (angry, happy, and sad) presented against neutral prosody (standard) within varying spoken words during a single electroencephalography recording session. The reported data included 34 infants (18 males, 16 females; age range: 3-12 months, average age: 7 months 26 days). Infants exhibited distinct patterns of mismatch responses (MMRs) to different emotional prosodies in both early (100-200 ms) and late (300-500 ms) time windows following speech onset. While both happy and angry prosodies elicited more negative early MMRs than the sad prosody across all infants, older infants showed more negative early MMRs than their younger counterparts. The distinction between early MMRs to angry and sad prosodies was more pronounced in younger infants. In the late time window, angry prosody elicited a more negative late MMR than the sad prosody, with younger infants showing more distinct late MMRs to sad and angry prosodies compared to older infants. Additionally, a sex effect was observed, with male infants displaying more negative early MMRs than females. These findings demonstrate the feasibility of the modified multifeature oddball protocol in studying neural sensitivities to emotional speech in infancy. The observed age and sex effects on infants' auditory neural responses to vocal emotions underscore the need for further research to distinguish between acoustic and emotional processing and to understand their roles in early socioemotional and language development. https://doi.org/10.23641/asha.27914553.
- Research Article
- 10.1016/j.clinph.2019.01.019
- Feb 12, 2019
- Clinical Neurophysiology
An extensive pattern of atypical neural speech-sound discrimination in newborns at risk of dyslexia
- Research Article
- 10.1016/j.clinph.2013.11.035
- Dec 13, 2013
- Clinical Neurophysiology
The impact of spectral resolution on the mismatch response to Mandarin Chinese tones: An ERP study of cochlear implant simulations
- Research Article
- 10.1371/journal.pone.0109806
- Oct 7, 2014
- PLoS ONE
Distributional learning of speech sounds (i.e., learning from simple exposure to frequency distributions of speech sounds in the environment) has been observed in the lab repeatedly in both infants and adults. The current study is the first attempt to examine whether the capacity for using this mechanism differs between adults and infants. To this end, a previous event-related potential study that had shown distributional learning of the English vowel contrast /æ/∼/ε/ in 2- to 3-month-old Dutch infants was repeated with Dutch adults. Specifically, the adults were exposed to either a bimodal distribution that suggested the existence of the two vowels (as appropriate in English), or to a unimodal distribution that did not (as appropriate in Dutch). After exposure, the participants were tested on their discrimination of a representative [æ] and a representative [ε], in an oddball paradigm for measuring mismatch responses (MMRs). Bimodally trained adults did not have a significantly larger MMR amplitude, and hence did not show significantly better neural discrimination of the test vowels, than unimodally trained adults. A direct comparison of the normalized MMR amplitudes of the adults with those of the previously tested infants showed that, within a reasonable range of normalization parameters, the bimodal advantage is reliably smaller in adults than in infants, indicating that distributional learning is a weaker mechanism for learning speech sounds in adults (if it exists in that group at all) than in infants.
- Research Article
- 10.1016/j.ridd.2015.10.002
- Oct 23, 2015
- Research in Developmental Disabilities
Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?
- Research Article
- 10.1111/ejn.15671
- Apr 1, 2022
- European Journal of Neuroscience
From auditory rhythm patterns, listeners extract the underlying steady beat and perceptually group beats to form metres. While previous studies show infants discriminate different auditory metres, it remains unknown whether they can maintain (imagine) a metrical interpretation of an ambiguous rhythm through top-down processes. We investigated this via electroencephalographic mismatch responses. We primed 6-month-old infants (N = 24) to hear a 6-beat ambiguous rhythm either in duple metre (n = 13) or in triple metre (n = 11) through loudness accents either on every second or every third beat. Periods of priming were inserted before sequences of the ambiguous unaccented rhythm. To elicit mismatch responses, occasional pitch deviants occurred on either beat 4 (strong beat in triple metre; weak in duple) or beat 5 (strong in duple; weak in triple) of the unaccented trials. At frontal left sites, we found a significant interaction between beat and priming group in the predicted direction. Post-hoc analyses showed that mismatch response amplitudes were significantly larger for beat 5 in the duple-primed than triple-primed group (p = .047) and were non-significantly larger for beat 4 in the triple-primed than duple-primed group. Further, amplitudes were generally larger in infants with musically experienced parents. At frontal right sites, mismatch responses were generally larger for those in the duple compared with triple group, which may reflect a processing advantage for duple metre. These results indicate that infants can impose a top-down, internally generated metre on ambiguous auditory rhythms, an ability that would aid early language and music learning.
- Research Article
- 10.1016/j.neuroimage.2022.119242
- Apr 25, 2022
- NeuroImage
Development of infants’ neural speech processing and its relation to later language skills: A MEG study
- Research Article
- 10.3389/fpsyg.2014.00077
- Jan 1, 2014
- Frontiers in Psychology
An important mechanism for learning speech sounds in the first year of life is “distributional learning,” i.e., learning by simply listening to the frequency distributions of the speech sounds in the environment. In the lab, fast distributional learning has been reported for infants in the second half of the first year; the present study examined whether it can also be demonstrated at a much younger age, long before the onset of language-specific speech perception (which roughly emerges between 6 and 12 months). To investigate this, Dutch infants aged 2 to 3 months were presented with either a unimodal or a bimodal vowel distribution based on the English /æ/~/ε/ contrast, for only 12 minutes. Subsequently, mismatch responses (MMRs) were measured in an oddball paradigm, where one half of the infants in each group heard a representative [æ] as the standard and a representative [ε] as the deviant, and the other half heard the same reversed. The results (from the combined MMRs during wakefulness and active sleep) disclosed a larger MMR, implying better discrimination of [æ] and [ε], for bimodally than unimodally trained infants, thus extending an effect of distributional training found in previous behavioral research to a much younger age when speech perception is still universal rather than language-specific, and to a new method (using event-related potentials). Moreover, the analysis revealed a robust interaction between the distribution (unimodal vs. bimodal) and the identity of the standard stimulus ([æ] vs. [ε]), which provides evidence for an interplay between a perceptual asymmetry and distributional learning. The outcomes show that distributional learning can affect vowel perception already in the first months of life.