Abstract

To summarize the argument so far, our main points are: (1) the vast majority of behavioral, neuropsychological, and functional imaging data support the hypothesis that the left hemisphere is dominant for lexical and grammatical aspects of sign language perception and production; (2) because of potential design confounds, the Neville et al. study does not present any serious challenge to existing claims concerning the lateralization of sign language; and (3) evidence from both lesion and functional imaging data suggests that the within-hemisphere organization of signed and spoken language is in many respects, though not all, the same.

One difference, overlooked thus far, between the brain regions activated in the processing of ASL stimuli and those activated in the processing of auditorily presented spoken language stimuli concerns the supratemporal plane, the dorsal aspect of the temporal lobe, which includes the transverse temporal (Heschl's) gyrus and the planum temporale. This region is uniformly activated in hearing subjects listening to spoken language [17,18,24], but it was not activated in deaf subjects watching ASL sentences in the Neville et al. study, nor was it activated in an fMRI study of single-sign perception in a native deaf signer [25].

One potential explanation is that supratemporal plane structures are involved in processing non-linguistic auditory information [26]: because these are not language-processing systems, perception of ASL would not be expected to activate them, whereas speech stimuli would produce supratemporal plane activation as a result of some type of acoustic response. Another possibility, however, is that the supratemporal plane contains systems directly and critically involved in the perception of speech, that is, in extracting linguistic information from an auditory signal, as some authors have suggested (Ref. [27] and D. Poeppel, PhD thesis, MIT, 1995). This hypothesis could explain the presence of supratemporal activation in auditory language perception and its absence in sign language perception. It also predicts that there should be some processing system outside of the canonical language areas involved in extracting sign information from the visual input.
On this view, there are both modality-dependent and modality-independent components to the neural organization of language perception. Modality-dependent components are those involved in extracting linguistic information from the sensory input; modality-independent components are those involved in operating on higher-level linguistic representations. On the available data, it is possible that supratemporal plane structures are part of a modality-dependent system involved in speech perception, whereas lateral temporal lobe structures are part of a modality-independent system involved in higher-level linguistic operations.

None of this discussion, however, has really answered the question posed at the outset: what is driving the neural organization of language? We don't yet know for sure. In fact, the data reviewed above render the problem a bit more puzzling (and thus perhaps more interesting). What we do know is that modality-specific factors are not the whole story. Save for the possibility of speech perception, the neural organization of language appears to be largely independent of the modalities through which it is perceived and produced. Notice, however, that this conclusion rules out the most intuitive and probably the oldest answer to the question, namely that language systems are really just dynamically organized subsystems of the particular sensory and motor channels through which language is used. Instead, the answer will have to be couched in terms that generalize over modality. Whether such an account will ultimately appeal to genetically constrained, domain-specific regional specializations or to some complex interaction of domain-general processing biases (or both) remains to be seen. Provocative issues indeed.

References

17 Binder, J.R. et al. (1994) Functional magnetic resonance imaging of human auditory cortex, Ann. Neurol. 35, 662–672
18 Millen, S.J., Haughton, V.M. and Yetkin, Z. (1995) Functional magnetic resonance imaging of the central auditory pathway following speech and pure-tone stimuli, Laryngoscope 105, 1305–1310
24 Hickok, G. et al. (1997) Functional MR imaging of auditorily presented words: a single-item presentation paradigm, Brain Lang. 58, 197–201
25 Hickok, G. et al. (1997) Sensory mapping in a congenitally deaf subject: MEG and fMRI studies of cross-modal non-plasticity, Hum. Brain Mapp. 5, 437–444
26 Binder, J.T. et al. (1996) Function of the left planum temporale in auditory and linguistic processing, Brain 119, 1239–1247
27 Näätänen, R. et al. (1997) Language-specific phoneme representations revealed by electric and magnetic brain responses, Nature 385, 432–434
