Abstract

Left-hemispheric language dominance is a well-known characteristic of the human language system. However, it has been shown that leftward language lateralization decreases dramatically when people communicate using whistles. Whistled languages transform a spoken language into whistles, facilitating communication over great distances. To investigate the laterality of Silbo Gomero, a whistled form of Spanish, we used a vocal and a whistled dichotic listening task in a sample of 75 healthy Spanish speakers. Both individuals who were able to whistle and to understand Silbo Gomero and a non-whistling control group showed a clear right-ear advantage for vocal dichotic listening. For whistled dichotic listening, the control group did not show any hemispheric asymmetry. In contrast, the whistlers showed a right-ear advantage for whistled stimuli, although this advantage was smaller than the right-ear advantage found for vocal dichotic listening. In line with a previous study on the lateralization of whistled Turkish, these findings suggest that whistled language processing is associated with a decrease in left-hemispheric and a relative increase in right-hemispheric processing. This shows that bihemispheric processing of whistled language stimuli occurs independently of the underlying spoken language.
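
As a concrete illustration of how such an ear advantage can be quantified: the excerpt does not state the scoring procedure, but dichotic listening asymmetries are commonly summarized with a laterality index of the form LI = 100 × (R − L) / (R + L), where R and L are the numbers of correctly reported right- and left-ear stimuli. The short Python sketch below uses this convention; the function name and the response counts are hypothetical and are not values from the study.

# Illustrative sketch only (not taken from the paper): ear advantages in
# dichotic listening are often expressed as a laterality index,
# LI = 100 * (R - L) / (R + L), with R and L the counts of correctly
# reported right- and left-ear stimuli. All names and numbers are hypothetical.

def laterality_index(right_correct, left_correct):
    """Positive LI = right-ear advantage (suggesting left-hemisphere dominance),
    negative LI = left-ear advantage, 0 = no asymmetry."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("LI is undefined without any correct responses.")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical whistler showing the pattern described above: a strong
# asymmetry for vocal syllables and a weaker one for whistled syllables.
vocal_li = laterality_index(28, 20)     # ~16.7 -> clear right-ear advantage
whistled_li = laterality_index(24, 21)  # ~6.7  -> smaller right-ear advantage
print(f"vocal LI = {vocal_li:.1f}, whistled LI = {whistled_li:.1f}")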



Introduction

Both the left and the right hemisphere contribute to language processing, but they are relevant for different aspects of how language is processed. The auditory language comprehension model by Friederici [1] assumes that, when spoken language is perceived, the left hemisphere is dominant for the processing of syntactic structures, semantic relations, grammatical and thematic relations, and information integration. The right hemisphere is dominant for the processing of prosody, intonational phrasing, and accentuation focus. This implies that a language that relies more heavily on prosodic cues than spoken language does should elicit greater right-hemispheric activation. Left-hemispheric language dominance is found in 96% of strong right-handers, 85% of ambidextrous individuals, and 83% of strong left-handers [4,5]. It has been suggested that this left-hemispheric dominance stems from the left hemisphere's superiority in assessing fast temporal changes in auditory input, which makes it ideally suited to analyze the voice onset times of different syllables [13,14,15].
