Abstract

Robots are becoming a part of humans' social lives as assistants, companions, therapists, and entertainers. One promising application of socially assistive robots is in autism therapy, where robots are employed to enhance the verbal and nonverbal skills (e.g., eye-gaze attention, facial expression mimicry) of individuals with Autism Spectrum Disorder (ASD). An important question is how the gaze responses of individuals with ASD differ from those of Typically Developing (TD) peers when they interact with a robot. We present the results of our recent studies on modeling and analyzing the gaze patterns of children with ASD while they interact with a robot called NAO. This paper reports the differences in gaze responses between the TD and ASD groups in two conversational contexts: speaking versus listening. We used Variable-order Markov Models (VMMs) to discover the temporal gaze-direction patterns of the ASD and TD groups. The results reveal that the gaze responses of TD individuals in the speaking and listening contexts are best modeled by VMMs of order zero and three, respectively. As expected, this shows that the temporal gaze patterns of typically developing children change with their role in the conversation. For the ASD individuals, however, a VMM of order one best fit the data in both conversational contexts. Overall, the results indicate that the VMM is a powerful technique for modeling the different gaze responses of TD and ASD individuals in speaking and listening contexts.
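To make the order-comparison idea concrete, the sketch below fits Markov models of increasing order to a sequence of gaze-direction labels and reports how well each order explains the data. This is a simplified, hedged illustration: it uses plain fixed-order Markov chains with add-alpha smoothing as a stand-in for a full VMM (which would rely on a context tree and variable-length contexts), and the gaze labels and example sequence are hypothetical, not taken from the study.

```python
from collections import defaultdict
import math

def fit_markov(seq, order):
    """Count context -> next-symbol transitions for a fixed-order Markov chain."""
    counts = defaultdict(lambda: defaultdict(float))
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        counts[context][seq[i]] += 1.0
    return counts

def avg_log_likelihood(seq, counts, order, alphabet, alpha=1.0):
    """Per-symbol log-likelihood with add-alpha smoothing over the gaze alphabet."""
    ll, n, vocab = 0.0, 0, len(alphabet)
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        ctx_counts = counts.get(context, {})
        total = sum(ctx_counts.values())
        p = (ctx_counts.get(seq[i], 0.0) + alpha) / (total + alpha * vocab)
        ll += math.log(p)
        n += 1
    return ll / max(n, 1)

# Hypothetical gaze-direction coding of a child-robot interaction session
alphabet = ["robot", "examiner", "elsewhere"]
gaze_seq = ["robot", "robot", "elsewhere", "robot", "examiner", "robot"] * 20

# Compare orders 0..3, echoing the orders reported in the abstract; in practice
# model selection would use held-out data or a penalized criterion, not training fit.
for order in range(4):
    counts = fit_markov(gaze_seq, order)
    print(order, round(avg_log_likelihood(gaze_seq, counts, order, alphabet), 3))
```

The order whose model assigns the highest per-symbol likelihood (on held-out data) would be taken as the best description of that group's gaze dynamics in a given conversational context.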
