Abstract

Socially assistive robots (SARs) have the potential to revolutionize educational experiences by providing safe, non-judgmental, and emotionally supportive environments for children's social development. The success of SARs relies on the synergy of multiple modalities, such as speech, gestures, and gaze, to maximize the interactive experience. This paper presents an approach for generating SAR behaviors by extending an upper ontology. The ontology enables flexible and scalable adaptive behavior generation by defining key assistive intents, turn-taking, and input properties. We compare the generated behaviors with hand-coded behaviors validated through an experiment with young children. The results demonstrate that the automated approach covers the majority of the manually developed behaviors while allowing substantial adaptation to specific circumstances. The framework holds potential for broader interoperability in other assistive domains and facilitates the generation of context-dependent, socially appropriate robot behaviors.
