Abstract

Speech production involves coordinated processing in many regions of the brain. To better understand these processes, our research team has designed, tested, and refined a neural network model whose components correspond to brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After learning, the model can produce combinations of the sounds it has learned by commanding movements of an articulatory synthesizer. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during speech. The model is also being used to investigate speech motor disorders, such as stuttering, apraxia of speech, and ataxic dysarthria. These projects compare the effects of damage to particular regions of the model to the kinematics, acoustics, or brain activation patterns of speakers with similar damage. Finally, insights from the model are being used to guide the design of a brain-computer interface for providing prosthetic speech to profoundly paralyzed individuals.
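To make the babbling-and-imitation idea concrete, the sketch below shows one way such a training scheme could be organized: random articulations are paired with the sounds they produce to learn a forward articulatory-to-auditory mapping, which is then inverted to imitate a heard target. This is a minimal illustration only, not the authors' implementation; the toy linear synthesizer, the dimensions, and the least-squares learning rule are all assumptions made for brevity.

```python
# Minimal sketch (assumed, illustrative only) of a babbling phase that learns a
# forward mapping from articulator positions to auditory features, followed by
# an imitation step that inverts the learned mapping to reach an auditory target.
import numpy as np

rng = np.random.default_rng(0)
N_ART, N_AUD = 7, 3          # articulatory and auditory dimensions (assumed)

def synthesizer(art):
    """Stand-in for an articulatory synthesizer: articulation -> auditory features."""
    W_true = np.sin(np.arange(N_ART * N_AUD)).reshape(N_ART, N_AUD)
    return art @ W_true

# Babbling: random articulations paired with their auditory consequences.
art_samples = rng.uniform(-1, 1, size=(500, N_ART))
aud_samples = synthesizer(art_samples)

# Learn a linear forward model from the babbled pairs (least squares).
W_fwd, *_ = np.linalg.lstsq(art_samples, aud_samples, rcond=None)

# Imitation: find an articulation whose predicted sound matches a heard target.
target_aud = synthesizer(rng.uniform(-1, 1, size=(1, N_ART)))
art_guess, *_ = np.linalg.lstsq(W_fwd.T, target_aud.T, rcond=None)
produced = synthesizer(art_guess.T)

print("auditory error after imitation:", np.linalg.norm(produced - target_aud))
```

In the actual model the mappings are nonlinear neural mappings and include somatosensory and phonological representations as well, but the same two-stage logic applies: babbling supplies the paired data, and imitation uses the learned mappings to drive the articulatory synthesizer toward a target sound.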
