THE EFFECT OF BILINGUALISM ON UNFAMILIAR LANGUAGE PERCEPTION

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

Background. A growing number of scientists are studying the mechanisms underlying different aspects of language acquisition and perception and searching for new methods of mastering them. The impact of bilingualism on cognitive functions such as attention, memory, and concentration is also important.

Methods. The present study, based on EEG recording, highlights the influence of bilingualism on the perception of different languages: the native language (Ukrainian), a second language mastered at a certain level (English), and a language that has not previously been learned at any level (Finnish). These languages belong to different language groups, which makes it impossible to fully or partially understand words or phrases through associations with similar linguistic structures in familiar languages. The purpose of this study was not simply to demonstrate a difference in the perception of different languages but to describe in detail the change in the brain's electrical activity, to investigate which frequency bands and sub-bands are involved, and to identify which brain regions may be responsible for this function. Twenty bilingual and multilingual students aged 18-22 who voluntarily agreed to participate were involved.

Results. The results showed a statistically significant difference in the perception of the languages in pairs: Ukrainian and English, English and Finnish, and Ukrainian and Finnish. This difference is most pronounced in the β1 and β2 frequency sub-bands. The following brain areas are involved, with different intensities, in processing languages of different language groups: the occipital part of the right and left hemispheres, the temporal part of the left hemisphere, and the parietal part of the right hemisphere.

Conclusions. The observed neural differences in the perception of known and unknown languages provide further evidence that language comprehension relies on both auditory and cognitive processing mechanisms, engaging different brain regions depending on familiarity with the language. The increased activation in the occipito-temporal and parietal regions during language processing suggests that both linguistic and non-linguistic factors, such as phonological familiarity and semantic expectations, play a crucial role in language perception. A detailed study of bilingualism and the mechanisms of language perception opens up prospects for improved methods of teaching foreign languages, which will in turn expand people's ability to use large amounts of information to improve their knowledge and skills.
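The kind of analysis described above, comparing per-condition power in β sub-bands and testing language pairs against each other, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline: the sampling rate, the β1/β2 band edges (conventional values), and the paired t-test are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_rel

FS = 250  # sampling rate in Hz (assumed; not stated in the abstract)

# Sub-band edges are conventional values, not taken from the paper.
BANDS = {"beta1": (13.0, 20.0), "beta2": (20.0, 30.0)}

def band_power(eeg, low, high, fs=FS):
    """Mean power of a signal after band-pass filtering to [low, high] Hz."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    return float(np.mean(filtered ** 2))

def compare_conditions(epochs_a, epochs_b, band):
    """Paired t-test on per-epoch band power between two listening conditions."""
    low, high = BANDS[band]
    pa = [band_power(e, low, high) for e in epochs_a]
    pb = [band_power(e, low, high) for e in epochs_b]
    return ttest_rel(pa, pb)

# Toy demonstration: condition B carries extra 22 Hz (beta2) power.
rng = np.random.default_rng(0)
t_axis = np.arange(0, 2, 1 / FS)
cond_a = [rng.normal(0, 1, t_axis.size) for _ in range(20)]
cond_b = [rng.normal(0, 1, t_axis.size) + 0.8 * np.sin(2 * np.pi * 22 * t_axis)
          for _ in range(20)]
t_stat, p_val = compare_conditions(cond_a, cond_b, "beta2")
print(f"beta2: t = {t_stat:.2f}, p = {p_val:.4f}")
```

In the actual study the same comparison would be run per electrode site, which is how region-specific effects (occipital, temporal, parietal) become visible.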

Similar Papers
  • Research Article
  • Cited by 47
  • 10.1037/a0015869
Differential neural contributions to native- and foreign-language talker identification.
  • Jan 1, 2009
  • Journal of Experimental Psychology: Human Perception and Performance
  • Tyler K Perrachione + 2 more

Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system's ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance on binaural performance across languages. Listeners' right-ear (left hemisphere) performance was a better predictor of binaural accuracy in their native language than a foreign one. This enhanced role of the classically language-dominant left hemisphere in listeners' native language demonstrates functionally integrated neural systems for speech and voice perception during talker identification.

  • Research Article
  • Cited by 3
  • 10.1111/j.1749-818x.2009.00184.x
Teaching and Learning Guide for: Speaking and Hearing Clearly: Talker and Listener Factors in Speaking Style Changes
  • Mar 1, 2010
  • Language and Linguistics Compass
  • Rajka Smiljanic + 1 more


  • Research Article
  • Cited by 106
  • 10.1017/s0142716418000152
Perceptual beginnings to language acquisition
  • Jul 1, 2018
  • Applied Psycholinguistics
  • Janet F Werker

In this article, I present a selective review of research on speech perception development and its relation to reference, word learning, and other aspects of language acquisition, focusing on the empirical and theoretical contributions that have come from my laboratory over the years. Discussed are the biases infants have at birth for processing speech, the mechanisms by which universal speech perception becomes attuned to the properties of the native language, and the extent to which changing speech perception sensitivities contribute to language learning. These issues are reviewed from the perspective of both monolingual and bilingual learning infants. Two foci will distinguish this from my previous reviews: first and foremost is the extent to which contrastive meaning and referential intent are not just shaped by, but also shape, changing speech perception sensitivities, and second is the extent to which infant speech perception is multisensory and its implications for both theory and methodology.

  • Research Article
  • Cited by 5
  • 10.1044/leader.ftr2.11082006.6
Spoken Language Processing: A Convergent Approach to Conceptualizing (Central) Auditory Processing
  • Jun 1, 2006
  • The ASHA Leader
  • Larry Medwetsky


  • Research Article
  • Cited by 186
  • 10.1523/jneurosci.1828-18.2019
Neural Speech Tracking in the Theta and in the Delta Frequency Band Differentially Encode Clarity and Comprehension of Speech in Noise
  • May 20, 2019
  • The Journal of Neuroscience
  • Octave Etard + 1 more

Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands, which have been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension by recording EEG responses to a native and a foreign language at different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated with clarity, whereas the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses.

SIGNIFICANCE STATEMENT. Speech is a highly complex signal whose processing requires analysis from lower-level acoustic features to higher-level linguistic information. Recent work has shown that neural activity in the delta and theta frequency bands tracks the rhythm of speech, but the role of this tracking for speech processing remains unclear.
Here we disentangle the roles of cortical entrainment in different frequency bands and at different temporal lags for speech clarity, reflecting the acoustics of the signal, and speech comprehension, related to linguistic processing. We show that cortical speech tracking in the theta frequency band encodes mostly speech clarity, and thus acoustic aspects of the signal, whereas speech tracking in the delta band encodes the higher-level speech comprehension.
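At its simplest, the band-specific "speech tracking" this work measures is a correlation between band-filtered EEG and the speech envelope at some response lag. Below is a minimal sketch on synthetic data; the sampling rate, band edges, and lag are illustrative assumptions, and the actual study used multivariate encoding/decoding models rather than a single correlation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100  # Hz; assumed common sampling rate for envelope and EEG

def bandpass(x, low, high, fs=FS):
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tracking_score(envelope, eeg, low, high, lag_samples):
    """Pearson correlation between the speech envelope and band-filtered EEG
    shifted by a response lag (the EEG lags the stimulus)."""
    band = bandpass(eeg, low, high)
    env = envelope[: len(envelope) - lag_samples]
    resp = band[lag_samples:]
    return float(np.corrcoef(env, resp)[0, 1])

# Synthetic demo: an "EEG" that is a delayed, noisy copy of delta-rate fluctuations.
rng = np.random.default_rng(2)
envelope = bandpass(rng.normal(0, 1, 3000), 1.0, 4.0)  # delta-band envelope
lag = 15  # 150 ms response latency (illustrative)
eeg = np.roll(envelope, lag) + rng.normal(0, 0.5, envelope.size)
score = tracking_score(envelope, eeg, 1.0, 4.0, lag_samples=lag)
print(f"delta-band tracking r = {score:.2f}")
```

Scanning `lag_samples` over a range of latencies is one way to expose early components like the one the authors report in the delta band.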

  • Book Chapter
  • 10.1093/acrefore/9780199384655.013.415
Second Language Phonetics
  • Apr 26, 2018
  • Ocke-Schwen Bohn

The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative to the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors which shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language. 
A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation which could result in the attrition of phonetic learning ability, but to the way phonetic categories develop as a function of experience with surrounding sound systems.

  • Research Article
  • Cited by 18
  • 10.1097/00003446-200112000-00011
Speech Science: An Integrated Approach to Theory and Clinical Practice
  • Dec 1, 2001
  • Ear and Hearing
  • Carole T Ferrand

Each chapter concludes with Case Study and Questions, Summary and Review Exercises. Foreword. Acknowledgments.
1. Introduction. Overview of Chapters.
2. The Nature of Sound. Air Pressure. Measurement of Air Pressure Movement of Air Air Pressure, Volume, and Density Sound: Changes in Air Pressure Elasticity and Inertia Wave Motion of Sound Characteristics of Sound Waves Frequency and Period Velocity and Wavelength Sound Absorption and Reflection Constructive and Destructive Interference Pure Tones and Complex Waves Speech as a Stream of Complex Periodic and Aperiodic Waves Visually Depicting Sound Waves: Waveforms and Spectra Attributes of Sounds Frequency and Pitch Human Range of Hearing Amplitude and Intensity Amplitude Intensity Decibel Scale Advantages of the Decibel Scale Auditory Area Resonance Free and Forced Vibration Types of Resonators Acoustic Resonators Acoustic Resonators as Filters Bandwidth Cutoff Frequencies Resonance Curves Parameters of a Filter Types of Filters
3. Clinical Application of Frequency and Intensity Variables. Vocal Frequency and Amplitude Frequency Variables Average Fundamental Frequency Frequency Variability Maximum Phonational Frequency Range Amplitude and Intensity Variables Average Amplitude Level Amplitude Variability Dynamic Range Voice Range Profile Breakdowns in Control of Vocal Frequency and Amplitude Voice Disorders Neurological Disorders
4. The Respiratory System. The Structure and Mechanics of the Respiratory System Structures of the Lower Respiratory System Bronchial Tree Muscles of Respiration Accessory Muscles of Respiration Muscles of the Abdomen Pleural Linkage Moving Air Into and Out of the Lungs Inhalation Exhalation Rate of Breathing Lung Volumes and Capacities Resting Expiratory Level Lung Volumes Tidal Volume Inspiratory Reserve Volume Expiratory Reserve Volume Residual Volume Dead Air Lung Capacities Vital Capacity Functional Residual Capacity Total Lung Capacity Development of Lung Volumes and Capacities Differences Between Breathing for Life and Breathing for Speech Location of Air Intake Ratio of Time for Inhalation versus Exhalation Volume of Air Inhaled per Cycle Muscle Activity for Exhalation Air Pressures and Flows in Respiration Air Pressures Airflow Lung Volume and Chest Wall Shape Breathing Patterns for Speech Changes in Speech Breathing over the Life-span
5. Clinical Application: Respiratory Breakdowns That Affect Speech Production. Conditions That Affect Speech Breathing Parkinson's Disease Cerebellar Disease Cervical Spinal Cord Injury Cerebral Palsy Mechanical Ventilation Voice Disorders Hearing Impairment
6. The Phonatory System. The Vocal Mechanism Laryngeal Skeleton Bones and Cartilages Joints of the Larynx Valves within the Larynx Aryepiglottic Folds False Vocal Folds True Vocal Folds Cover-body Model Glottis Muscles of the Larynx Extrinsic Muscles Intrinsic Muscles Myoelastic-Aerodynamic Theory of Phonation Vertical and Longitudinal Phase Differences during Vibration Voice Fundamental Frequency Voice Intensity Pressures Involved in Phonation The Complex Sound Wave of the Human Voice Glottal Spectrum Harmonic Spacing Nearly Periodic Nature of the Human Voice Sources of Jitter and Shimmer Measurement of Jitter and Shimmer Vocal Registers and Vocal Quality Vocal Registers Physiologic and Acoustic Bases of Pulse and Falsetto Registers Pulse Falsetto Spectral Characteristics of Pulse and Falsetto Use of Different Registers in Singing and Speaking Voice Quality Normal Voice Quality Abnormal Voice Qualities Acoustic Characteristics of Breathy and Rough or Hoarse Voice Breathy Voice Rough or Hoarse Voice Ways of Measuring Registers and Quality Electroglottography EGG and Register EGG Slope Quotients
7. Clinical Application: Measures of Jitter, Shimmer, and Quality. Jitter and Shimmer Measures Jitter and Shimmer Measures in Communication Disorders Amyotrophic Lateral Sclerosis Parkinson's Disease Endotracheal Intubation Laryngeal Cancer Functional Voice Problems Stuttering Measures of Voice Quality Need for Objective Measures of Voice Quality Aging EGG and Vocal Disorders EGG and Spasmodic Dysphonia EGG and Parkinson's Disease
8. The Articulatory System. Articulators of the Vocal Tract Oral Cavity Lips Teeth Dental Occlusion Hard Palate Soft Palate Muscles of the Velum Velopharyngeal Closure Tongue Muscles of the Tongue Tongue Movements for Speech Pharynx Muscles of the Pharynx Nasal Cavities Valves of the Vocal Tract Traditional Classification System of Consonants and Vowels Place of Articulation of English Consonants Manner of Articulation of English Consonants Stops Fricatives Affricates Nasals Glides Liquids Voicing Vowel Classification Vocal Tract Resonance Characteristics of the Vocal Tract Resonator Vocal Tract Filtering of the Glottal Sound Wave Source-filter Theory of Vowel Production Formant Frequencies Related to Oral and Pharyngeal Volumes Vowel Formant Frequencies F1/F2 Plots Spectrographic Analysis of Sounds Vowels Diphthongs Glides Liquids Stops Fricatives Affricates Nasals The Production of Speech Sounds in Context Coarticulation Suprasegmentals Intonation Stress Duration
9. Clinical Application: Breakdowns in Production of Vowels and Consonants. Source-filter Theory and Problems in Speech Production Dysarthria Vowel Duration Measurements Vowel Formant Measurements Consonant Measures Hearing Impairment Segmental Problems Suprasegmental Problems Instrumentation in Treatment Programs for Deaf Speakers Palatometry and Glossometry Phonological Disorders Tracheotomy Cleft Palate
10. The Auditory System. Parts of the Ear Outer Ear Tympanic Membrane Middle Ear Ossicles Muscles Auditory Tube Functions of the Middle Ear Inner Ear Cochlea Basilar Membrane Cochlear Function Perception of Speech Segmentation Problem Instrumental Analysis of Vowel and Consonant Perception Perception of Vowels and Diphthongs Vowels Diphthongs Perception of Consonants Categorical Perception Multiple Acoustic Cues in Consonant Perception Influence of Coarticulation Liquids Glides Nasals Stops Fricatives Affricates The Role of Context in Speech Perception Immittance Audiometry, Otoacoustic Emissions, and Cochlear Implants Immittance Audiometry Tympanograms Tympanometric Procedure Tympanogram Shapes Advantages of Tympanometry Otoacoustic Emissions Spontaneous and Evoked Otoacoustic Emissions Cochlear Implants
11. Clinical Application: Perceptual Problems in Hearing Impairment, Language and Reading Disability, and Articulation Deficits. Hearing Loss Vowel Perception Consonant Perception Cochlear Implants Otitis Media Language and Reading Disability Articulatory Problems
12. The Nervous System. Brain tissue Glial cells Neurons Types of neurons Sensory receptors Neuronal function Conduction velocity Functional anatomy of the nervous system Central nervous system Meninges Ventricles Overview of functional brain anatomy Cortex Lobes of the brain Frontal lobe Parietal lobes Temporal lobes Occipital lobe Limbic lobe Cortical connections Commissural fibers Association fibers Projection fibers Subcortical areas of the brain Basal nuclei Thalamus Hypothalamus Brainstem Midbrain Pons Medulla Cerebellum Spinal cord Cranial nerves Blood supply to the brain Motor control systems involved in speech production Motor cortex Upper and lower motor neurons Direct and indirect systems Motor units Principles of motor control Feedback and feedforward Efference copy
13. Clinical Application of Brain Function Measures. Techniques for imaging brain structure Computerized tomography Magnetic resonance imaging Techniques for imaging brain function Functional magnetic resonance imaging Positron emission tomography Single photon emission computed tomography Electroencephalography and evoked potentials Use of brain imaging techniques in communication disorders Stuttering Parkinson's disease Multiple sclerosis Alzheimer's disease
14. Models and Theories of Speech Production and Perception. Theories Models Speech Production The Serial-order issue Degrees of Freedom Context-sensitivity Problem Theories of Speech Production Target Models Feedback and Feedforward Models Dynamic Systems Models Connectionist Models Speech Perception Linearity and Segmentation Speaker Normalization Basic Unit of Perception Specialization of Speech Perception Categories of Speech Perception Theories Active versus Passive Bottom-up versus Top-down Autonomous versus Interactive Theories of Speech Perception Motor Theory Acoustic Invariance Theory Direct Realism TRACE Model Logogen Theory Cohort Theory Fuzzy Logical Model of Perception Native Language Magnet Theory
Glossary. Appendix: IPA Symbols for Consonants and Vowels. References. Index.

  • Research Article
  • Cited by 40
  • 10.1016/j.neuroimage.2008.02.046
Motor speech perception modulates the cortical language areas
  • Mar 6, 2008
  • NeuroImage
  • Julius Fridriksson + 5 more


  • Research Article
  • Cited by 3
  • 10.5144/0256-4947.1997.533
Cochlear Implantation in Deaf Children
  • Sep 1, 1997
  • Annals of Saudi Medicine
  • Mohammad J.A Makhdoum + 2 more

A cochlear implant (CI) is a hearing device introduced in the 1980s for profoundly deaf subjects who gained little or no benefit from powerful hearing aids. This device comprises an electrode array inserted in the cochlea, connected to an internal receiver, and an externally worn speech processor. The CI transforms acoustic signals into electrical currents which directly stimulate the auditory nerve. Since the early 1990s, cochlear implantation in children has been developing rapidly. Although it is still difficult to predict how a child will perform with a cochlear implant, the success of cochlear implantation can no longer be denied. In this paper, some recent papers and reports, and the results of the various Nijmegen cochlear implant studies, are reviewed. Issues about selection, examinations, surgery and the outcome are discussed. Overall, our results were comparable with those of other authors. It can be concluded that cochlear implantation is an effective treatment for postlingually deaf as well as prelingually (congenital or acquired) deaf children with profound bilateral sensorineural deafness.

  • Book Chapter
  • Cited by 4
  • 10.4324/9781315110622-9
Speech Perception and Discrimination
  • May 1, 2019
  • Caroline Junge + 2 more

Most children listen to speech as their primary source of communication. Yet which language they learn depends on where they grow up (Cutler, 2012): for instance, babies growing up in the Netherlands will grow up speaking Dutch while those growing up in China might master Mandarin. All these languages are markedly different: they differ in the repertoire of speech sounds, in the use of suprasegmental information (whether acoustic variations above the level of phonetics/phonemes can differentiate between words, such as word stress and tone), and in phonotactic information (which phoneme combinations are permissible in a language or not). When infants are born, their natural preferences and abilities in speech perception are hardly shaped by their native language. In other words, newborns are considered 'universal listeners' (Kuhl et al., 2008). Yet through repeated exposure to their native language, cross-linguistic differences between infants soon become apparent, which suggests that the first year of life sets the scene for language-specific listening. This chapter makes clear that speech perception is not a trivial task: speech is like a stream of sounds embedded in words combining into phrases, with no pauses that reliably signal where words begin or end (See Figure 1). Fortunately, speech contains and conveys cues to many linguistic elements simultaneously. Speech perception is the process of extracting cues from the speech stream, to recognise the message that a speaker is conveying. This process is further complicated by the fact that all speakers are different and therefore produce the cues slightly differently. 
In what follows next, we will first describe the input that children are exposed to (Section 15.1), before we turn to how children learn to recognise their native language from other acoustic signals (Section 15.2) and to decompose it into meaningful units: into sounds (Section 15.3), into suprasegmental units (Section 15.4), and finally, into words (Section 15.5). In Section 15.6 we discuss the development of speech perception in relation to speech exposure and brain maturation. In our concluding section we underscore the relevance of early speech perception skill as crucial for language acquisition.

15.1 What kind of speech do children hear? The primary source of speech is vocal fold vibration, resulting in voiced sounds with a fundamental frequency, which is perceived as pitch. Speech can also vary in amplitude, perceived as fluctuations in loudness, and in the duration of segments and phrases, which may signal speaking rate, among many other things.

Figure 1: A plot to indicate that speech is a stream of sounds: a sound spectrogram of a Dutch utterance 'in speech, all words are glued together', with time on the x-axis, spectral frequencies from 0-8000 Hz (signaling vowel and consonant properties) on the left y-axis, and fundamental frequencies (pitch information signaling intonation and word stress by means of the rising and falling line in the spectrogram) on the right y-axis. The tiers below the spectrogram show the speech stream segmented into relevant subunits of speech: the top tier shows how harmonics cluster to correspond to specific speech sound segments. The second tier groups these segments into Dutch words. Note that pauses in the speech signal the onset of plosives (/k/, /p/, /t/); they do not align with the onset or offset of words. The third tier offers a translation of the words into English.
Frequency, amplitude and duration are essential cues to suprasegmental structure, such as lexical stress, tone, and intonation of an entire utterance (Fry, 1955; Pierrehumbert, 1980; Beckman, 1986). To give an example from Figure 1: the second vowel in the Dutch word /əlkaːr/ is typically higher, louder, and longer than the first, because the second but not the first syllable carries lexical stress. These suprasegmental cues also play a role in distinguishing between the two main segmental classes, as vowels are typically voiced, and louder and longer than consonants.

  • Research Article
  • 10.1044/2024_jslhr-22-00531
Perception of Voicing and Aspiration in Hindi, American English, and Tamil Listeners in Quiet and in Background Noise.
  • May 15, 2024
  • Journal of speech, language, and hearing research : JSLHR
  • Reethee Antony + 3 more

There is a dearth of literature determining whether language groups for whom aspiration and/or voicing is phonologically contrastive show better perception relative to those who do not use these features contrastively, and whether the cue type modulates perception in noise. This study addresses perception of laryngeal cues (voicing and aspiration) by Hindi, English, and Tamil listeners, in quiet and in noise. Sixteen participants between 20 and 45 years of age were included in each of the three language groups. The stimuli were bilabial stops that contrasted phonetically in voicing and aspiration: voicing-lead [ba], short-lag [pa], and long-lag aspirated [pha], with one set corresponding to the Hindi phonemes /ba/, /pa/, and /pha/ and the second set to the English phonemes /ba/ and /pa/ (which are phonetically [pa] and [pha], respectively). Tamil includes only the short-lag [pa] as a bilabial stop consonant. The stimuli were presented at 70 dB SPL, in quiet and in speech-shaped noise at a signal-to-noise ratio of 0 dB. Participants performed two speech identification tasks and a speech discrimination task. Patterns of perceptual assimilation related to the first language were observed in all three language groups, and accuracy was generally higher in quiet than in noise. Hindi participants identified the English /pa/ as Hindi /pha/ and English /ba/ as Hindi /pa/. The American English participants identified Hindi /pha/ as English /pa/ and both the Hindi /pa/ and Hindi /ba/ as English /ba/. In contrast, Tamil listeners generally perceived both Hindi and English bilabial stops as one category, regardless of voicing and aspiration. English and Hindi participants generally showed higher accuracy for native language stimuli. Patterns of assimilation in quiet and noise differed across language groups for each stimulus type. The aspirated stimuli were most likely to be misperceived in noise by all groups (often as /ha/).
The results serve as evidence that listeners accurately access native language speech cues, even in noise. The results contribute toward a better understanding of cross-linguistic speech processing in noise.
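The noise manipulation described above, presenting speech in noise at a 0 dB signal-to-noise ratio, follows a standard scaling rule: the noise is gained so that its power matches the speech power. A minimal sketch (the tone stand-in for speech and white noise for speech-shaped noise are simplifying assumptions):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that speech-to-noise power ratio equals snr_db, then mix."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Required noise gain g solves: speech_power / (g**2 * noise_power) = 10**(snr_db/10)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 150 * np.arange(0, 1, 1 / 16000))  # stand-in "speech"
noise = rng.normal(0, 1, speech.size)  # white noise stands in for speech-shaped noise
mixed = mix_at_snr(speech, noise, snr_db=0.0)
```

At 0 dB the speech and noise contribute equal power, which is why assimilation patterns that survive this condition are strong evidence of robust native-cue access.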

  • Research Article
  • Cited by 1
  • 10.1023/a:1021148222052
The role of right- and left-hemispheric structures in speech and memory formation in children
  • Nov 1, 2002
  • Human Physiology
  • I P Lukashevich + 2 more

The role of structures of the left and right cerebral hemispheres in formation of speech function and memory was studied on the basis of complex examination of children with developmental speech disorders. On the basis of EEG estimation of the functional state of the brain, children were classified in two groups depending on the side of localization of changes in electrical activity: those with local changes in electrical activity in the left hemisphere (group I) and those with changes in the right hemisphere (group II). The medical history suggested that the observed features of topography of local changes in electrical activity were linked with the character of prenatal and labor complications and their consequences leading to embryo- and ontogenetic disorders in development of different brain regions. Comparison of the results of neuropsychological examination of the two groups showed that different regions of the brain cortex of both the left and right hemispheres are involved in speech formation. However, a specific role of the right hemisphere in formation and actualization of automatic speech series was revealed. It was suggested that the integrity of gnostic functions of the right hemisphere and, primarily, the spatial organization of perception and movements is a necessary factor of development of auditory–speech and nominative memory.

  • Research Article
  • Cited by 3
  • 10.5070/p75qr980vk
Frequency Effects in Cross-linguistic Stop Place Perception: A Case of /t/-/k/ in Japanese and English
  • Jan 1, 2007
  • UC Berkeley Phonology Lab Annual Reports
  • Reiko Kataoka + 1 more

UC Berkeley Phonology Lab Annual Report (2007)
Frequency Effects in Cross-Linguistic Stop Place Perception: A Case of /t/ - /k/ in Japanese and English
Reiko Kataoka and Keith Johnson

1. Introduction

The study described in this paper is an attempt to answer the question "Which aspect of speech perception is altered by linguistic experience, and how is this alteration done?" As Strange and Jenkins stated, "[t]he knowledge of a language possessed by a normal adult is a product of many years of exposure to a specific language environment" (1978: 125). Therefore, if linguistic knowledge influences speech perception, then adult listeners would perceive speech sounds in language-specific ways. Evidence of experience-based speech perception has accumulated from both cross-linguistic and within-language studies. One source of evidence is the phenomenon called categorical perception (Liberman et al. 1957, 1961a, 1961b), which is characterized by difficulty in discriminating acoustically similar patterns within a single phonemic category and near-perfect discrimination of acoustic patterns that straddle different categories. Categorical perception has been replicated with listeners of various language groups, and it seems undeniable that linguistic experience influences listeners' ability to discriminate speech stimuli. However, exactly which aspect of linguistic experience is responsible for language-specific speech perception is not well understood. This is the primary question addressed in this paper. Also, it is often reported that listeners react to speech sounds differently depending on whether they use a continuous mode or a categorical mode of memory (see, for example, Pisoni 1973). Thus, it is of interest to test whether experience-based perception is elicited when the listening task calls for auditory acoustic perception. The goal of this study is to obtain answers to these two questions.

The first question was treated by testing whether a particular aspect of linguistic structure (phoneme frequency) influences speech perception; the second question was treated by testing whether linguistic knowledge has any effect in an auditory acoustic perception task. The rest of the paper is organized as follows. Section 2 reviews previous studies of experience-based auditory perception and gives the rationale for the design of the current experiment. Section 3 describes the experimental study, presents the results, and discusses them. Finally, Section 4 discusses the implications of some of the findings from the current study for theories of speech perception.

2. Background

2.1. Experience-based speech perception

One of the prime examples of experience-based speech perception is categorical perception, a phenomenon originally defined by Liberman, Harris, Hoffman, and Griffith (1957). Categorical perception can be demonstrated in an experiment that uses a series of synthetic consonant-vowel stimuli ranging across two or more initial consonant categories (e.g. a ten-equal-step /da/-/ga/ continuum) and involves identification and discrimination tasks. One of the defining characteristics of categorical perception is the predictability of discrimination accuracy from the identification results. In its strongest form, categorical perception predicts that listeners can discriminate two stimuli only to the extent that they identify them as members of different phonemic categories.
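The idea that discrimination accuracy is predictable from identification results can be illustrated with a small numerical sketch. The formula below is a standard Haskins-style prediction (correct discrimination is at chance unless the two stimuli receive different labels); the identification probabilities along the hypothetical 10-step continuum are invented for illustration, not taken from the paper's data.

```python
def predicted_discrimination(p_i, p_j, chance=0.5):
    """Predicted probability of a correct discrimination response for a
    stimulus pair, given each stimulus's probability of being labelled as
    category A (Haskins-style prediction from identification data)."""
    # Probability that the two stimuli receive different labels:
    p_diff = p_i * (1 - p_j) + (1 - p_i) * p_j
    # Listeners are correct when labels differ, and at chance otherwise:
    return chance + (1 - chance) * p_diff

# Hypothetical identification curve along a 10-step /da/-/ga/ continuum:
# P(label = /da/) at each step, with a category boundary near the middle.
ident = [0.98, 0.97, 0.95, 0.90, 0.70, 0.30, 0.10, 0.05, 0.03, 0.02]

# Predicted accuracy for each adjacent (one-step) stimulus pair.
pairs = [predicted_discrimination(ident[k], ident[k + 1]) for k in range(9)]

# The predicted discrimination peak falls on the pair that straddles
# the category boundary (steps 5-6 in this made-up curve).
best = max(range(9), key=lambda k: pairs[k])
```

Running this, the boundary pair (0.70 vs. 0.30) is predicted at 0.79 correct, while within-category pairs stay near chance, which is the characteristic peaked discrimination function of categorical perception.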

  • Research Article
  • Citations: 201
  • 10.1097/01.wco.0000168081.76859.c1
The latest on functional imaging studies of aphasic stroke
  • Aug 1, 2005
  • Current Opinion in Neurology
  • Cathy J Price + 1 more

Functional neuro-imaging studies of aphasic stroke offer the potential for a better understanding of the neuronal mechanisms that sustain language recovery. Conclusions, however, have been hampered by a set of unexpected challenges related to experimental design and interpretation. In this review of studies published between January 2004 and February 2005, we discuss imaging studies of speech production and comprehension in patients with aphasia after left hemisphere stroke. Studies of speech production suggest that recovery depends on slowly evolving activation changes in the left hemisphere. In contrast, right hemisphere activation changes have been interpreted in terms of transcallosal disinhibition that does not reflect recovery, because these changes occur early after stroke, in areas homologous to the lesion, and do not appear to correlate with the level of recovery. There have been few studies of auditory speech comprehension, but unlike speech production, recovery of speech comprehension appears to depend on both left and right temporal lobe activation. Together, recent studies provide a deeper appreciation of how the neuronal mechanisms of recovery depend on the task, the lesion site, the time from insult, and the distinction between neuronal reorganization that does and does not sustain recovery. Although many more studies of aphasic stroke are required, with larger patient numbers and more focal lesion sites, we also argue that clinical diagnosis and treatment require a better understanding of the normal variability in functional anatomy and of the many neuronal pathways available to sustain each type of language task.

  • Research Article
  • Citations: 18
  • 10.1016/j.neuroimage.2014.08.030
Theta–gamma coupling reflects the interaction of bottom-up and top-down processes in speech perception in children
  • Aug 27, 2014
  • NeuroImage
  • Juan Wang + 5 more
