Abstract

Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.

Highlights

  • Introductions to signed and spoken languages typically mention the radical difference in production and perception between the two language modalities, assigning a main and different articulatory organ in each case: Spoken languages are produced by the vocal tract and perceived by ear; signed languages are produced manually and perceived by eye.

  • Mouth actions, including mouthings derived from the surrounding spoken language (e.g., silent articulation of the English word “apple” while producing the British Sign Language (BSL) sign APPLE1 manually), occur in a semantic and temporal relationship with corresponding manual productions (Bank, Crasborn, & van Hout, 2011; Sutton-Spence & Day, 2001).

  • We further investigate the mutual interaction of the two channels in each language: does incongruent mouthing disrupt sign comprehension to the same extent as an incongruent manual form disrupts mouthing comprehension (Experiment 2), and does incongruent gesture disrupt speech comprehension to the same extent as incongruent speech disrupts gesture comprehension (Experiment 4)? We also test whether these effects are modulated by language proficiency (Experiments 2 and 4) and, for the BSL experiments, by hearing status (Experiments 1 and 2).

Introduction

Introductions to signed and spoken languages typically mention the radical difference in production and perception between the two language modalities, assigning a main and different articulatory organ in each case: Spoken languages are produced by the vocal tract and perceived by ear; signed languages are produced manually and perceived by eye. Facial movements, for example, brow movements, are closely coordinated with speech, especially with prosodic cues marking focus and prominence (Krahmer & Swerts, 2004), and the visible movements of the mouth are necessarily time-locked with the phonetic articulation of speech. Mouth actions, including mouthings derived from the surrounding spoken language (e.g., silent articulation of the English word “apple” while producing the British Sign Language (BSL) sign APPLE1 manually), occur in a semantic and temporal relationship with corresponding manual productions (Bank, Crasborn, & van Hout, 2011; Sutton-Spence & Day, 2001). Other cues on the face, for example, raised or furrowed brows, mark grammatical information related to sentence structure and type, including topicalization and question marking (Liddell, 1980; Sutton-Spence & Woll, 1999), with scope indicated by clearly timed onsets and offsets (Pyers & Emmorey, 2008).
