Rapid decay of perceptual memory in dyslexia.
- Research Article
- 10.18203/issn.2454-5929.ijohns20204191
- Sep 23, 2020
- International Journal of Otorhinolaryngology and Head and Neck Surgery
<p class="abstract"><strong>Background:</strong> The P300 was among the first auditory responses in the collection of event-related, or endogenous, evoked responses. The P300 is related to cognition and to the use of knowledge about the environment.</p><p class="abstract"><strong>Methods:</strong> The subjects (n=60) were selected with an equal distribution of genders. P300 evoked potentials elicited by non-speech and speech stimuli were recorded.</p><p class="abstract"><strong>Results:</strong> There is a significant difference in P300 latency for speech versus non-speech stimuli, as well as a significant difference in P300 latency between males and females for speech versus non-speech stimuli. There is no significant difference in P300 amplitude for speech versus non-speech stimuli or for right versus left ears.</p><p class="abstract"><strong>Conclusions:</strong> P300 latency is influenced by the stimulus used and by gender. The present study showed that the non-speech stimuli had lower latencies than the speech stimuli. For the P300 amplitude values, the differences between groups were not significant.</p>
- Research Article
- 10.1121/1.4780629
- Apr 1, 2003
- The Journal of the Acoustical Society of America
Our objective is to examine the relation between central auditory processes and discrimination of speech (consonant–vowel) and nonspeech (frequency glide) stimuli. Behavioral responses and auditory evoked potentials (MMN and P300) of ten adults were evaluated to synthetically generated consonant–vowel (CV) speech and nonspeech contrasts. The CVs were two within-category stimuli and the nonspeech stimuli were two frequency glides whose frequencies matched the formant transitions of the CV stimuli. Listeners exhibited significantly better behavioral discrimination of the nonspeech versus speech stimuli in same/different and oddball behavioral paradigms. MMN responses were present in all subjects to both stimulus contrasts and did not differ significantly by stimulus type. P300s were present in nine of ten subjects to both stimulus contrasts. However, the CV speech contrasts produced P300s with significantly smaller amplitudes and longer latencies than those to the nonspeech stimuli. These results suggest that the stimuli were processed differently when measured behaviorally and with the P300, but not when measuring the MMN. The enhanced discrimination of the frequency glide stimuli versus the CV stimuli of analogous acoustical content supports the idea that different levels of processing mediate the auditory perception of speech versus nonspeech stimuli.
- Research Article
- 10.1163/22134808-000s0015
- Jan 1, 2013
- Multisensory Research
A strong factor influencing multisensory integration is the temporal relationship between the sensory inputs that are combined. Individuals with Autism Spectrum Disorders (ASD) exhibit both atypical multisensory and temporal processing deficits relative to their typically developing (TD) peers. A series of behavioral and fMRI studies from our lab have focused on the link between these two processes. Using speech and non-speech stimuli with parametrically varied temporal relationships between the auditory and visual components, we showed that multisensory temporal processing is indeed altered in ASD, with the largest deficits observed with speech stimuli. The temporal changes seen with simple, non-speech stimuli are strongly correlated with behavioral measures of perceptual binding of audiovisual speech, which suggests that low-level multisensory temporal deficits have cascading effects on speech perception. To explore the neural substrates of these behavioral effects, we implemented an fMRI paradigm in individuals with ASD and TD where we presented synchronous and asynchronous speech and non-speech stimuli. We functionally localized a region in the superior temporal sulcus (pSTS), based on its involvement in multisensory binding and temporal processing and known functional and anatomical differences in ASD. Responses to audiovisual stimuli were extracted and compared across stimulus types and groups. Both TD and ASD groups show reduced pSTS activation with synchronous relative to asynchronous non-speech presentations, reflecting increased processing efficiency. For speech stimuli, only the TD group showed this effect. These data suggest differences in neural processing in pSTS may be at the core of atypical speech perception observed in ASD.
- Research Article
- 10.1044/1092-4388(2005/081)
- Oct 1, 2005
- Journal of Speech, Language, and Hearing Research
Auditory event-related potentials (mismatch negativity and P300) and behavioral discrimination were measured to synthetically generated consonant-vowel (CV) speech and nonspeech contrasts in 10 young adults with normal auditory systems. Previous research has demonstrated that behavioral and P300 responses reflect a phonetic, categorical level of processing. The aims of the current investigation were (a) to examine whether the mismatch negativity (MMN) response is also influenced by the phonetic characteristics of a stimulus or if it reflects purely an acoustic level of processing and (b) to expand our understanding of the neurophysiology underlying categorical perception, a phenomenon crucial in the processing of speech. The CVs were 2 within-category stimuli and the nonspeech stimuli were 2 glides whose frequency ramps matched the formant transitions of the CV stimuli. Listeners exhibited better behavioral discrimination to the nonspeech versus speech stimuli in same/different and oddball behavioral paradigms. MMN responses were elicited by the nonspeech stimuli, but absent to CV speech stimuli. Larger amplitude and earlier P300s were elicited by the nonspeech stimuli, while smaller and longer latency P300s were elicited by the speech stimulus contrast. Results suggest that the 2 types of stimuli were processed differently when measured behaviorally, with MMN, or P300. The better discrimination and clearer neurophysiological representation of the frequency glide, nonspeech stimuli versus the CV speech stimuli of analogous acoustic content support (a) categorical perception representation at the level of the MMN generators and (b) parallel processing of acoustic (sensory) and phonetic (categorical) information at the level of the MMN generators.
- Research Article
- 10.1016/j.heares.2014.04.009
- May 10, 2014
- Hearing Research
Brainstem response to speech and non-speech stimuli in children with learning problems
- Research Article
- 10.1016/j.neuropsychologia.2025.109199
- Sep 1, 2025
- Neuropsychologia
Hemispheric laterality in neural processing of speech and non-speech temporal information on multiple timescales.
- Research Article
- 10.1016/j.dcn.2016.04.001
- Apr 14, 2016
- Developmental Cognitive Neuroscience
Auditory evoked potentials to speech and nonspeech stimuli are associated with verbal skills in preschoolers
- Research Article
- 10.1016/j.bandc.2010.09.005
- Nov 9, 2010
- Brain and Cognition
Effects of audio–visual integration on the detection of masked speech and non-speech sounds
- Research Article
- 10.14738/assrj.24.907
- Apr 25, 2015
- Advances in Social Sciences Research Journal
This study focused on factors inhibiting mentally challenged learners in the acquisition of skills. Its objectives were to establish the challenges facing mentally handicapped learners in the acquisition of learning skills; to establish the effect of parent and community support and involvement on the acquisition of learning skills; to determine the effect of the role of teachers and of the teaching methods used on the acquisition of learning skills; and to find out the effect of learner characteristics on the acquisition of learning skills among mentally handicapped learners. A descriptive survey design and purposive sampling techniques were used to sample the respondents; a total of 76 respondents responded to the research instruments. Among the major findings, the study established that the major challenges facing the acquisition of learning among the mentally handicapped included students' involvement in disruptive behaviours, which interferes with cognitive functioning, and an inability to cope with frustration. The role of teachers and the teaching methods used were not a hindrance to the acquisition of learning skills among mentally challenged students. The study recommended that awareness be created through the dissemination of information on mentally handicapped learners, and suggested a comparative study of the process of acquiring learning skills among typical and mentally challenged learners.
- Research Article
- 10.1016/j.neuroimage.2005.05.040
- Jul 14, 2005
- NeuroImage
Discrimination and categorization of speech and non-speech sounds in an MEG delayed-match-to-sample study
- Research Article
- 10.1016/s0010-9452(75)80027-x
- Dec 1, 1975
- Cortex
Simple Reaction-Times to Speech and Non-Speech Stimuli
- Research Article
- 10.3389/conf.neuro.09.2009.01.156
- Jan 1, 2008
- Frontiers in Human Neuroscience
Dichotic stimulation accentuates hemispheric asymmetry in pre-attentive change detection for different acoustic features. Rika Takegata1*, C. Jacquier2, S. Pakarinen1, T. Kujala1 and R. Näätänen1,3 (1 University of Helsinki, Finland; 2 CNRS and Université Louis Lumière Lyon 2, France; 3 University of Tartu, Estonia). Introduction: The left and right hemispheres show functional asymmetries in the processing of acoustic (e.g., temporal) features that are crucial for speech perception. Anomalous asymmetry for a certain acoustic feature may provide a clue to identifying the source of deteriorated speech perception. The current study examined the effects of the manner (dichotic vs. monaural) and the ear (left vs. right) of stimulation on the mismatch negativity (MMN) for speech and non-speech sounds, pursuing a fast and reliable method of examining hemispheric asymmetry at the pre-attentive level. Methods: The speech and non-speech stimulus sequences each comprised a frequent sound (standard: S) and four types of infrequent sounds (deviants: D) that differed from the standard in duration, frequency, intensity, or vowel (or an equivalent temporal-spectral change in the non-speech stimuli). The standard and deviant stimuli appeared alternately (e.g., S Dfrequency S Dduration S…), according to the fast paradigm developed by Näätänen et al. (2004, Clin Neurophysiol). In the dichotic condition, speech and non-speech stimuli were presented alternately, with speech stimuli to the right ear and non-speech stimuli to the left ear; the stimulus-ear assignment was reversed for half of the experiment. In the monaural condition, either the speech or the non-speech stimuli alone were presented to a single ear. Subjects watched silent films with subtitles and ignored the stimulus sounds.
Results and Discussion: The ear of stimulation had a significant effect on the MMN amplitude for the speech and non-speech stimuli in the dichotic condition, whereas it had no effect in the monaural condition (Fig. 1). The results indicate that dichotic stimulation accentuated the differential processing of speech vs. non-speech sounds even at the pre-attentive level. The observed ear effects may reflect asymmetric hemispheric contributions to the processing of different acoustic features. Conference: 10th International Conference on Cognitive Neuroscience, Bodrum, Turkey, 1 Sep - 5 Sep, 2008. Presentation Type: Oral Presentation. Topic: Change Detection. Citation: Takegata R, Jacquier C, Pakarinen S, Kujala T and Näätänen R (2008). Dichotic stimulation accentuates hemispheric asymmetry in pre-attentive change detection for different acoustic features. Conference Abstract: 10th International Conference on Cognitive Neuroscience. doi: 10.3389/conf.neuro.09.2009.01.156. Received: 05 Dec 2008; Published Online: 05 Dec 2008.
* Correspondence: Rika Takegata, University of Helsinki, Helsinki, Finland, rika.takegata@helsinki.fi
- Research Article
- 10.2174/1874082001307010005
- Oct 18, 2013
- The Open Neuroscience Journal
This study investigated the extent to which audiovisual speech integration is special by comparing behavioral and neural measures using both speech and non-speech stimuli. An audiovisual recognition experiment presenting listeners with auditory, visual, and audiovisual stimuli was implemented. The auditory component consisted of sine wave speech, and the visual component consisted of point-light displays, which include point-light dots that highlight a talker's points of articulation. In the first phase, listeners engaged in a discrimination task where they were unaware of the linguistic nature of the auditory and visual stimuli. In the second phase, they were informed that the auditory and visual stimuli were spoken utterances of /be/ (bay) and /de/ (day), and they engaged in the same task. The neural dynamics of audiovisual integration were investigated using EEG, including mean Global Field Power and current density reconstruction (CDR). As predicted, support for divergent regions of multisensory integration between the speech and non-speech stimuli was obtained, namely greater posterior parietal activation in the non-speech condition. Conversely, reaction-time measures indicated qualitatively similar multisensory integration across experimental conditions.
- Research Article
- 10.1121/10.0019039
- Mar 1, 2023
- The Journal of the Acoustical Society of America
Previous research on tone perception has identified several important pitch-related cues, including average pitch height (AH), contour, onset, and offset, and the weighting of these cues has been shown to be language dependent. However, since multiple pitch cues covary with each other, few studies have directly compared the relative importance of these cues. It is also not clear whether the same ranking of cues holds in speech and non-speech stimuli. The current study aims to tease apart the relative role of each cue using AX discrimination. Four pairs of tone contrasts with minimal pitch differences were created. The tone contrasts in the contour condition are two level tones differing by 7 Hz. The tone contrasts in the AH, onset, and offset conditions each have one rising tone and one falling tone sharing the same AH, onset, and offset, respectively. If a cue is important, then when that cue is kept constant, variation in the other cues should be hard to perceive. 48 Mandarin speakers and 48 Cantonese speakers were recruited. Results showed that AH was the most important cue for both Mandarin and Cantonese listeners, and that contour (offset) was more important than onset for Mandarin (Cantonese) listeners in speech stimuli. This ranking did not hold for nonspeech stimuli.
- Research Article
- 10.1121/1.1981612
- Jan 1, 1972
- The Journal of the Acoustical Society of America
Temporal order judgment (TOJ) in dichotic listening can be a difficult task. Previous experiments that used two speech stimuli on each trial (S/S) obtained sizable error rates when subjects were required to report which ear led (TOJ-by-ear). When subjects were required to identify the leading stimulus (TOJ-by-stimulus), the error rate increased substantially. Apparently, the two speech stimuli were competing for analysis by the same processor, and so were overloading it. The present experiment used the same TOJ tasks, but presented a speech and a nonspeech stimulus on each trial (S/NS). The error rate was comparable to that of S/S for TOJ-by-ear, but did not increase for TOJ-by-stimulus. This would be expected if the speech and nonspeech stimuli are being sent to different processors, each of which performs its analysis without interference from the other. The interpretation of the data given here is consistent with the results of standard identification experiments reported elsewhere: when asked to identify both stimuli on each dichotic trial, subjects made many errors on S/S, while performance was virtually error-free on S/NS.