Abstract

The first 40 years of research on the neurobiology of sign languages (1960–2000) established that the same key left-hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15–20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.

Highlights

  • Once sign languages were shown to be natural, full-fledged languages, it became clear that investigating their neural substrates could provide unique evidence for how the brain is organized for human language processing

  • Deaf signers exhibited increased perceptual sensitivity compared to nonsigners. These results indicate that the early neural tuning that underlies the discrimination of language from nonlanguage information occurs for both speakers and signers, but in different cortical regions

  • It is possible that posterior superior temporal cortex performs somewhat different computations during sign versus word comprehension


Summary

Karen Emmorey *


INTRODUCTION
Neurobiology of Sign Languages
THE NEUROBIOLOGY OF SIGN LANGUAGE PRODUCTION
Sign Articulation and Phonological Encoding
Lexical Production
Sentence and Phrase Production
THE NEUROBIOLOGY OF SIGN LANGUAGE COMPREHENSION
Perception of Sublexical Phonological Structure
Lexical Comprehension
Sentence Comprehension
CONCLUSION AND FUTURE DIRECTIONS
