Abstract
Human speech is a complex combination of sounds, i.e., auditory events. To date, there is no consensus on how speech perception occurs: does the brain react to each sound in the flow of speech separately, or are discrete units distinguished in the sound stream and analyzed by the brain as a single sound event? This pilot study analyzed the responses of the human midbrain to simple tones, combinations of simple tones ("complex" sounds), and lexical stimuli. The work describes individual cases obtained in the framework of intraoperative monitoring during surgical treatment of deep midline tumors of the brain or brainstem. The study included local field potentials recorded from the midbrain in 6 patients (2 women, 4 men). S- and E-complexes that emerge at the beginning and end of a sound, as well as S-complexes that emerge when the structure of a sound changes, were identified. The data suggest that the identified complexes are markers of the primary coding of auditory information and are generated by structures of the neural network that provides speech perception and analysis.