Abstract

This study examined facial expressions produced during a British Sign Language (BSL) narrative task (Herman et al., International Journal of Language and Communication Disorders 49(3):343–353, 2014) by typically developing deaf children and deaf children with autism spectrum disorder. The children produced BSL versions of a video story in which two children are seen to enact a language-free scenario where one tricks the other. This task encourages elicitation of facial acts signalling intention and emotion, since the protagonists showed a range of such expressions during the events portrayed. Results showed that typically developing deaf children produced facial expressions which closely aligned with native adult signers’ BSL narrative versions of the task. Children with ASD produced fewer targeted expressions and showed qualitative differences in the facial actions that they produced.

Highlights

  • Deaf people use facial expressions while signing both to express their own emotions and to describe the emotions of others, drawing on the same range of emotional facial expressions used naturally by the general population, e.g. happiness, anger and sadness (Carminati and Knoeferle 2013)

  • In a study of comprehension of emotional facial expressions in sign language in deaf autism spectrum disorder (ASD) and typically developing (TD) groups, we found that the ASD group showed a deficit during sign language processing analogous to the deficit in vocal emotion recognition observed in hearing children with ASD (Denmark et al. 2014)

  • No differences were found between groups for narrative content (H(1) = .132, p = .71), narrative structure (H(1) = .89, p = .018) or grammar (H(1) = 2.132, p = .144). This is the first study to investigate the frequency and quality of facial expressions produced during a sign language narrative in deaf children with and without ASD.
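The group comparisons reported above use the Kruskal-Wallis H test (written H(1) because two groups give one degree of freedom). As a hedged sketch of how that statistic is computed, the pure-Python function below implements the standard rank-based formula with tie correction; the TD and ASD scores are invented for illustration and are not data from the study.

```python
from collections import Counter

def kruskal_wallis_h(*groups):
    """Return the Kruskal-Wallis H statistic (tie-corrected) for 2+ groups."""
    # Pool all observations, remembering which group each came from.
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)

    # Assign 1-based ranks, averaging ranks across runs of tied values.
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1

    # Sum of ranks within each group.
    rank_sums = [0.0] * len(groups)
    for (_, gi), r in zip(pooled, ranks):
        rank_sums[gi] += r

    # H = 12 / (n(n+1)) * sum(R_i^2 / n_i) - 3(n+1)
    h = 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

    # Divide by the tie-correction factor 1 - sum(t^3 - t) / (n^3 - n).
    tie_counts = Counter(v for v, _ in pooled).values()
    correction = 1 - sum(t ** 3 - t for t in tie_counts) / (n ** 3 - n)
    return h / correction

# Hypothetical narrative scores (NOT the study's data), two groups -> H(1).
td_scores = [12, 14, 13, 15, 11, 14, 13]
asd_scores = [13, 12, 14, 11, 13, 15, 12]
print(f"H(1) = {kruskal_wallis_h(td_scores, asd_scores):.3f}")
```

With the H value in hand, the p-value is obtained from the chi-squared distribution with one degree of freedom (e.g. `scipy.stats.chi2.sf(h, df=1)`).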


Introduction

Deaf people use facial expressions while signing both to express their own emotions and to describe the emotions of others, drawing on the same range of emotional facial expressions used naturally by the general population, e.g. happiness, anger and sadness (Carminati and Knoeferle 2013). They also use facial actions that provide sign language prosody, which functions in sign language like intonation in spoken languages. In speech, prosody is carried in the vocal channel through patterns of stress, rhythm and intonation; for deaf sign language users, prosody is conveyed while signing through an extensive range of prosodic facial acts produced in synchrony with the movements and holds of the hands (Dachkovsky and Sandler 2009). Sign language prosodic functions include lengthening effects, as well as lower-face behaviours, eyeblinks and torso leans (Brentari and Crossley 2002). Certain facial actions are considered integral to marking phonological, lexical, syntactic and discourse features in sign language. Neuropsychological studies of deaf signers with discrete right- and left-hemisphere lesions (Corina et al. 1999; MacSweeney et al. 2008) have demonstrated a dissociation between linguistic and non-linguistic uses of the face, with linguistic functions localized to the left hemisphere and affective functions mediated by the right hemisphere.
