Abstract

Spoken and signed languages (SLs) deliver perceptual cues which exhibit varying degrees of perceptual validity during categorization: in spoken languages, listeners develop perceptual biases when integrating multiple acoustic dimensions during auditory categorization (Holt & Lotto, 2006). This leads us to expect differential perceptual validity for the dynamic gestural units HANDSHAPE, MOVEMENT, ORIENTATION, and LOCATION produced by manual articulators in SLs. In this study, we use a closed-set sentence discrimination task developed by Bochner et al. (2011) to evaluate the perceptual salience of the gestural components of signs in American Sign Language (ASL) for naïve signers and deaf L2 learners of ASL proficient in another SL. Our goal is to gauge which of these features are likely to provide the phonetic basis of sonority in the sign modality and convey phonemic contrasts perceptible even to first-time signers. Twenty-five deaf L2 ASL signers and 28 hearing English speakers with no experience in any SL participated in this study. Results reveal that phonemic contrasts based on HANDSHAPE presented an area of maximum difficulty in phonological discrimination for sign-naïve participants. For all participants, contrasts based on ORIENTATION and LOCATION, which involve larger-scale articulators, were associated with robust categorical discrimination.

Highlights

  • Natural languages, spoken and signed, deliver perceptual cues which exhibit various degrees of perceptual validity in categorization

  • Despite the radical difference in modalities, compelling evidence supporting a unification account of spoken and signed language phonologies comes from the observation that the articulatory features which spoken and signed languages deploy as markers of phonological contrasts are present in pre-linguistic infants’ babbling regardless of their hearing ability

  • Using a closed-set sentence discrimination task developed by Bochner et al. (2011), we evaluated the relative perceptual salience of articulatory features in American Sign Language (ASL), as proxied by the rate of successful discrimination of ASL sentence pairs differing in one aspect of their visuo-spatial configuration; we tested the ability of hearing English speakers with no experience in any sign language to detect phonological contrasts encoded by the ASL gestural components HS, ORI, MOV, and LOC


Introduction

Spoken and signed languages deliver perceptual cues which exhibit varying degrees of perceptual validity in categorization. Despite the radical difference in modalities, compelling evidence supporting a unification account of spoken and signed language phonologies comes from the observation that the articulatory features which spoken and signed languages deploy as markers of phonological contrasts are present in pre-linguistic infants’ babbling regardless of their hearing ability: human infants babble both vocally and manually, differing only in their dominant babbling modality (Petitto & Marentette, 1991). Prior literature on sign perception grounds the gestural competence of non-signers in their experience producing and perceiving communicative gestures and their sense of how gestures function, with and without speech (see, e.g., Brentari, 2010; Bochner et al., 2011). This suggests that humans may be receptive to phonological features in the sign domain irrespective of previous sign exposure, and that such receptivity should be most readily available for perceptually salient articulatory features of signs.

