Abstract
The first 40 years of research on the neurobiology of sign languages (1960–2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15–20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
Highlights
Once sign languages were shown to be natural, full-fledged languages, it became clear that investigating their neural substrates could provide unique evidence for how the brain is organized for human language processing
Deaf signers exhibited increased perceptual sensitivity to linguistic stimuli compared to nonsigners. These results indicate that the early neural tuning that underlies the discrimination of language from nonlanguage information occurs for both speakers and signers, but in different cortical regions
It is possible that posterior superior temporal cortex performs somewhat different computations during sign versus word comprehension