Abstract

Socially assistive robots (SARs) have the potential to revolutionize educational experiences by providing safe, non-judgmental, and emotionally supportive environments for children's social development. The success of SARs relies on the synergy of different modalities, such as speech, gestures, and gaze, to maximize interactive experiences. This paper presents an approach for generating SAR behaviors by extending an upper ontology. The ontology supports flexibility and scalability in adaptive behavior generation by defining key assistive intents, turn-taking patterns, and input properties. We compare the generated behaviors with hand-coded behaviors that were validated in an experiment with young children. The results show that the automated approach covers the majority of the manually developed behaviors while allowing substantial adaptation to specific circumstances. The technical framework holds potential for broader interoperability in other assistive domains and facilitates the generation of context-dependent, socially appropriate robot behaviors.
