Abstract

Socially assistive robots are designed to help people through interactions that are inherently social, such as tutoring, coaching, and therapy. Because they operate in social environments, these robots must be programmed to recognize, process, and communicate the social cues people use. For example, non-verbal behaviors like eye gaze and gesture convey significant information in social interactions. However, identifying the correct non-verbal behavior to perform in a given context is a non-trivial problem in social robotics. One approach to designing robot behaviors is data-driven, that is, reliant on actual observations of human behavior rather than pre-coded heuristics. This approach involves collecting data from natural human-human interactions and then training a model on that data. From this model, we can generate non-verbal robot behaviors for known contexts, as well as identify the context given observations of new non-verbal behaviors. In this talk, I outline my current research designing data-driven generative behavior models for tutoring tasks. I also touch on the challenges of real-world robotics and how those challenges overlap with those faced by mobile augmented reality systems.
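The two capabilities the abstract describes, generating a behavior for a known context and inferring the context behind an observed behavior, can be illustrated with a minimal probabilistic sketch. This is not the author's model; it is a hypothetical example where a conditional distribution P(behavior | context) is estimated from toy (context, behavior) pairs, and Bayes' rule recovers the most likely context. All context and behavior names below are illustrative assumptions.

```python
# Minimal sketch of a data-driven behavior model (illustrative only):
# estimate P(behavior | context) from observed human-human interaction data,
# then (1) sample a behavior for a known context and (2) infer the most
# likely context for a new observed behavior via Bayes' rule.
import random
from collections import Counter, defaultdict

# Toy observations of (context, non-verbal behavior) pairs from a tutoring task.
observations = [
    ("explaining", "gaze_at_material"), ("explaining", "pointing_gesture"),
    ("explaining", "gaze_at_material"), ("questioning", "gaze_at_learner"),
    ("questioning", "gaze_at_learner"), ("questioning", "head_tilt"),
    ("encouraging", "nod"), ("encouraging", "gaze_at_learner"),
]

# Train: count behaviors per context to estimate P(behavior | context).
counts = defaultdict(Counter)
context_counts = Counter()
for context, behavior in observations:
    counts[context][behavior] += 1
    context_counts[context] += 1

def generate_behavior(context, rng=random):
    """Sample a non-verbal behavior for a known context."""
    behaviors, freqs = zip(*counts[context].items())
    return rng.choices(behaviors, weights=freqs)[0]

def infer_context(behavior):
    """Most likely context for an observed behavior: argmax P(context | behavior)."""
    total = sum(context_counts.values())
    def posterior(context):
        prior = context_counts[context] / total
        likelihood = counts[context][behavior] / context_counts[context]
        return prior * likelihood
    return max(counts, key=posterior)
```

With these toy counts, `infer_context("nod")` returns `"encouraging"`, and `generate_behavior("explaining")` samples from the behaviors seen in that context. A real system would replace the count table with a learned generative model over continuous gaze and gesture signals.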
