Abstract

Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people. Increasing evidence indicates that children with ASD may not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore the nonverbal gestures and social cues, such as facial expressions, that usually aid social interaction. We used an augmented reality (AR)-based video modeling (VM) storybook (ARVMS) to attract and strengthen the attention of children with ASD to nonverbal social cues, because these children have difficulty adjusting and switching their attentional focus. In this research, AR serves two functions: it extends the social features of the story, and it restricts attention to the most important parts of the videos. Evidence-based research shows that AR attracts the attention of children with ASD; however, few studies have combined AR with VM to train children with ASD to mimic facial expressions and emotions in order to improve their social skills. In addition, we used markerless natural tracking to teach the children to recognize patterns as they focused on the stable visual image printed in the storybook and then extended their attention to an animation of the story. After data had been collected in three phases (baseline, intervention, and maintenance), the results showed that the ARVMS intervention provided an augmented visual indicator that effectively attracted and maintained the attention of children with ASD to nonverbal social cues and helped them better understand the facial expressions and emotions of the storybook characters.
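The abstract does not specify how the markerless natural tracking was implemented. As a minimal sketch only, one common approach is feature-based image recognition, in which the printed storybook page itself acts as the "natural marker": the page is located in each camera frame and the matched region anchors the story animation. The code below illustrates that idea with OpenCV; all function and variable names are hypothetical and do not come from the authors' system.

```python
# Illustrative sketch only: markerless (image-target) tracking of a printed
# storybook page via ORB feature matching. Not the authors' implementation.
import cv2
import numpy as np

def build_target(page_image_path: str):
    """Precompute ORB features for the printed storybook page (the natural marker)."""
    page = cv2.imread(page_image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(page, None)
    return page, keypoints, descriptors, orb

def locate_page(frame_gray, page, kp_page, des_page, orb, min_matches=15):
    """Find the storybook page in a camera frame; return its projected corners or None."""
    kp_frame, des_frame = orb.detectAndCompute(frame_gray, None)
    if des_frame is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_page, des_frame)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_page[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = page.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    # The projected corners mark where the VM animation would be overlaid.
    return cv2.perspectiveTransform(corners, H)
```

In a full AR pipeline, the recovered homography (or a pose estimated from it) would drive the rendering step so that the VM animation appears anchored to the printed page as the camera moves.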
