Abstract

Spoken word representations are hypothesized to be built from smaller segments of the speech signal, including phonemes and acoustic features. The language-level statistics of sound sequences ("phonotactics") are thought to play a role in integrating sub-lexical representations into words in the human brain. In four neurosurgical patients, we recorded electrocorticographic (ECoG) neural activity directly from the brain surface while they listened to spoken real words and pseudowords with varying transition probabilities (TPs) between the consonants and vowels (Cs and Vs) in a set of CVC stimuli. Electrodes over the left superior temporal gyrus (STG) were sensitive to TPs in a way that suggested dynamic, near real-time tracking of the speech input. TP effects were observed independently of activity explained by acoustic variability, as measured by each electrode's spectrotemporal receptive field (STRF). Furthermore, population-level analyses of STG electrodes demonstrated that TP effects differed for real words vs. pseudowords. These results support the hypothesis that lifelong exposure to phonetic sequences shapes the organization and synaptic weights of neural networks that process sounds in a given language, and that phonotactic information is used dynamically to integrate sub-lexical speech segments toward lexical representations.
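The two quantitative ingredients in the design, biphone transition probabilities and spectrotemporal receptive fields, can be made concrete with short sketches. First, a minimal sketch of how C-to-V and V-to-C transition probabilities might be estimated from corpus counts: P(next phone | current phone) is the count of the adjacent phone pair divided by the count of the first phone. The mini-lexicon below is invented for illustration; the study's TPs were derived from language-level statistics, not from toy data like this.

```python
from collections import Counter

# Hypothetical CVC items, each a (C, V, C) phone triple. Toy data only;
# the paper's TPs come from language-level corpus statistics.
lexicon = [
    ("p", "a", "t"), ("p", "a", "s"), ("b", "i", "t"),
    ("b", "i", "g"), ("s", "a", "t"), ("s", "i", "t"),
]

def transition_probs(sequences):
    """Estimate P(next | current) from adjacent phone pairs."""
    pair_counts = Counter()
    first_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

tp = transition_probs(lexicon)
print(tp[("p", "a")])  # P(a | p) = 2/2 = 1.0
print(tp[("a", "t")])  # P(t | a) = 2/3
```

Second, a hedged sketch of STRF estimation. The abstract does not specify the fitting procedure; a common choice in the ECoG literature is ridge regression of an electrode's response onto time-lagged spectrogram features, which is what is assumed here. The function name `fit_strf` and the `n_lags` and `alpha` values are illustrative, not the paper's.

```python
import numpy as np

def fit_strf(spectrogram, response, n_lags=30, alpha=1.0):
    """Ridge-regression STRF mapping time-lagged spectrogram features
    (T, F) to one electrode's response (T,). Assumed method, not the
    paper's documented pipeline."""
    T, F = spectrogram.shape
    # Lagged design matrix: row t holds spectrogram[t], spectrogram[t-1], ...
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spectrogram[:T - lag]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags * F), X.T @ response)
    return w.reshape(n_lags, F)  # rows index lag, columns index frequency band

# Toy usage with random data standing in for a recording session.
rng = np.random.default_rng(0)
strf = fit_strf(rng.standard_normal((1000, 16)), rng.standard_normal(1000))
print(strf.shape)  # (30, 16)
```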
