Abstract

One of the prominent recent applications of human-robot interaction is assistive therapy using humanoid robots for children with Autism Spectrum Disorder (ASD). Such robot-assisted therapies have shown promising results in developing new communication methodologies and in improving the motor movements, joint attention, and physical behavior of children with ASD. Daily-life interactions rely on multiple types of social cues. This study focuses on combinations of the most commonly used social cues, i.e., visual, speech, and motion, for improving joint attention in children with ASD. The paired-stimulus combinations used in this research are visual plus speech (V+S), speech plus motion (S+M), and motion plus visual (M+V). The V+S stimulus was presented using the robot’s blinking eyes together with the spoken phrase “Hello! I am NAO robot”. The S+M cue consisted of a waving hand with the spoken phrase “Hello! Nice to meet you”, and the M+V cue was presented as the robot sitting down while showing a colored eye cue. The aim is to measure the children’s responses to the different combinations of social cues in terms of Joint Attention (JA); the duration for which eye contact is established also indicates a child’s response to a given paired stimulus. The experiments were carried out on 12 subjects over a period of 2 months, at a frequency of one trial per week per subject, with each subject participating in 8 experiments. The results show that each combination of paired stimuli introduced to the children with ASD has a similar effect on the child’s joint attention, with average accuracies of 66.23%, 66.40%, and 66.95% for the three types of paired stimuli, respectively.
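To make the stimulus presentation concrete, the sketch below shows one way the three paired stimuli could be scripted on a NAO robot using the NAOqi Python SDK. This is an illustration, not the authors’ implementation: the robot address, the LED colors, and the animation path are assumptions, while ALTextToSpeech, ALLeds, ALRobotPosture, and ALAnimationPlayer are standard NAOqi modules.

```python
# Minimal sketch of the three paired stimuli on a NAO robot (NAOqi Python SDK).
# ROBOT_IP, the LED colors, and the wave-animation path are placeholders/assumptions.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # placeholder robot address

tts     = ALProxy("ALTextToSpeech",    ROBOT_IP, PORT)
leds    = ALProxy("ALLeds",            ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture",    ROBOT_IP, PORT)
anim    = ALProxy("ALAnimationPlayer", ROBOT_IP, PORT)

def visual_plus_speech():
    # V+S: blink/flash the eye LEDs while the robot introduces itself.
    leds.fadeRGB("FaceLeds", 0x0000FF, 0.3)   # eyes to blue (color is an assumption)
    tts.say("Hello! I am NAO robot")
    leds.fadeRGB("FaceLeds", 0xFFFFFF, 0.3)   # back to white

def speech_plus_motion():
    # S+M: wave the hand while greeting (animation path is an assumption).
    anim.post.run("animations/Stand/Gestures/Hey_1")  # non-blocking call
    tts.say("Hello! Nice to meet you")

def motion_plus_visual():
    # M+V: sit down while showing a colored eye cue.
    leds.fadeRGB("FaceLeds", 0x00FF00, 0.3)   # eyes to green (color is an assumption)
    posture.goToPosture("Sit", 0.5)
```

In an experimental session, one of these functions would be triggered per trial and the child’s eye contact recorded for the corresponding paired stimulus.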
