Abstract

This paper examines how gesturers and signers use their bodies to express concepts such as instrumentality and humanness. Comparing across eight sign languages (American, Japanese, German, Israeli, and Kenyan Sign Languages, Ha Noi Sign Language of Vietnam, Central Taurus Sign Language of Turkey, and Al-Sayyid Bedouin Sign Language of Israel) and the gestures of American non-signers, we find recurring patterns for naming entities in three semantic categories (tools, animals, and fruits & vegetables). These recurring patterns are captured in a classification system that identifies iconic strategies based on how the body is used together with the hands. Across all groups, tools are named with manipulation forms, where the head and torso represent those of a human agent. Animals tend to be identified with personification forms, where the body serves as a map for a comparable non-human body. Fruits & vegetables tend to be identified with object forms, where the hands act independently from the rest of the body to represent static features of the referent. We argue that these iconic patterns are rooted in using the body for communication, and provide a basis for understanding how meaningful communication emerges quickly in gesture and persists in emergent and established sign languages.

Highlights

  • The studies ask, first, whether the forms produced by gesturers and signers reliably use each of the iconic strategies named here, and second, whether coding these strategies can reveal important differences between gesturers and signers of different sign languages

  • Overview of the studies: we examine lexical signs produced by deaf signers of eight different languages, as well as the gestures without speech produced by hearing American non-signers during a picture task

Introduction

Sign language researchers have long intuited, albeit with caution, that sign languages tend to be more similar to each other than spoken languages are known to be (Newport & Supalla, 2000; Pizzuto & Volterra, 2000; Meier, 2002; Woll, 2003; Perniss, Pfau, & Steinbach, 2007). We propose that, when asked to identify and name pictures of entities, gesturers and signers consistently draw from the same set of strategies to form distinct iconic patterns marking three semantic categories: (i) tools, which are man-made artifacts; (ii) animals, which are natural entities that are non-human and animate; and (iii) fruits & vegetables, which are natural, inanimate entities.
