Abstract

Autism spectrum disorder involves persistent difficulties in social communication. Although these difficulties affect both verbal and nonverbal communication, no quantitative behavioral studies to date have investigated the cross-modal coordination of verbal and nonverbal communication in autism. The objective of the present study was to characterize the dynamic relation between speech production and facial expression in children with autism and to establish how face-directed gaze modulates this cross-modal coordination. In a dynamic mimicry task, participants watched and repeated neutral and emotional spoken sentences with accompanying facial expressions. Analysis of audio and motion-capture data quantified cross-modal coordination between simultaneous speech production and facial expression. Whereas neurotypical children produced emotional sentences with strong cross-modal coordination and neutral sentences with weak cross-modal coordination, autistic children produced similar levels of cross-modal coordination for both sentence types. An eye-tracking analysis revealed that cross-modal coordination of speech production and facial expression was greater when a neurotypical child spent more time looking at the face, but weaker when an autistic child spent more time looking at the face. In sum, social communication difficulties in autism spectrum disorder may involve deficits in cross-modal coordination. This finding may inform how autistic individuals are perceived in their daily conversations.

Highlights

  • Prior work has shown that emotional speech productions of children with ASD are rated as more emotionally intense and more awkward than those of their NT peers [3,7]

  • Whereas previous studies use Granger causality to establish whether an individual coordinates the expressions of different facial regions, the present study proposes a new application of this method, namely, to establish whether an individual coordinates facial expression with speech production

  • Consistent with the main effect of sentence type, mean cross-modal coordination differed significantly between neutral and emotional sentences, with emotional sentences displaying stronger cross-modal coordination than neutral sentences (for children with ASD: t(3.0×10⁵) = 67.92, p < 1×10⁻¹⁶; for NT children: t(3.0×10⁵) = 169.60, p < 1×10⁻¹⁶)
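The study's analysis code is not reproduced here, but the core idea behind a Granger-style measure of cross-modal coordination can be sketched as follows. A minimal illustration, assuming two time series (e.g., a vocal feature and a facial-motion feature) and a hypothetical helper `granger_strength` that compares the residual variance of a restricted autoregressive model (the target signal predicted from its own past) against a full model that also includes the other signal's past; larger values mean the second signal's history improves prediction of the first:

```python
import numpy as np

def granger_strength(x, y, lag=2):
    """Log ratio of residual variances: AR model of x from its own past
    (restricted) vs. x from its own past plus lags of y (full).
    Larger values indicate that y's past helps predict x."""
    n = len(x)
    target = x[lag:]
    own = [x[lag - j : n - j] for j in range(1, lag + 1)]
    other = [y[lag - j : n - j] for j in range(1, lag + 1)]
    ones = np.ones(n - lag)
    X_restricted = np.column_stack([ones] + own)
    X_full = np.column_stack([ones] + own + other)

    def resid_var(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return np.mean(r ** 2)

    return float(np.log(resid_var(X_restricted) / resid_var(X_full)))

# Synthetic demo (not study data): y drives x, but not vice versa.
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=0.1)
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.normal(scale=0.1)

g_y_to_x = granger_strength(x, y)  # expected to be clearly positive
g_x_to_y = granger_strength(y, x)  # expected to be near zero
```

This is only a conceptual sketch on synthetic data; the published analysis may use different model orders, preprocessing, and significance testing (e.g., F-tests over the nested models).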

Introduction

Prior work has shown that emotional speech productions of children with ASD are rated as more emotionally intense and more awkward than those of their NT peers [3,7]. When individuals with ASD tell a narrative, their gestures are less coordinated with the timing of speech production than those of NT individuals [30]. Despite this evidence of receptive difficulties in cross-modal integration, there have been no quantitative behavioral studies to date on how autistic individuals coordinate vocal and facial expressions during speech production, or on whether atypical face-directed gaze is related to any differences in the quality or coordination of vocal and facial expression. Such information is crucial to understanding the relation between receptive and expressive social communication skills and to documenting possible reasons for the perceived awkwardness of the facial and vocal expressions of autistic individuals [4].
