Abstract

To enable situated human-robot interaction (HRI), an autonomous robot must both understand and control proxemics (the social use of space) in order to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced by a human and perceived by a robot. The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for estimating the robot's proxemic goal state with respect to human-robot distance and orientation parameters, (2) a reactive proxemic controller for realizing the goal state, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human-robot proxemics with respect to "interaction potential" (predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user), demonstrating their efficacy in supporting richer robot perception and autonomy in HRI.
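The following is a minimal, illustrative Python sketch of the sampling-based goal state estimation idea named above: sample candidate (distance, orientation) configurations and keep the one that maximizes predicted recognition rate ("interaction potential"). The `p_recognition` function, its Gaussian form, and all numeric parameters are hypothetical stand-ins for the paper's learned data-driven models, not the authors' implementation.

```python
import math
import random


def p_recognition(distance: float, angle: float) -> float:
    """Hypothetical predicted rate of successful automated speech and
    gesture recognition at a given human-robot distance (meters) and
    relative orientation (radians). A Gaussian bump centered on a
    1.5 m, face-to-face configuration stands in for a learned model."""
    return (math.exp(-((distance - 1.5) ** 2) / 0.5)
            * math.exp(-(angle ** 2) / 0.3))


def estimate_goal_state(n_samples: int = 1000, seed: int = 0):
    """Sample candidate proxemic goal states and return the
    (distance, orientation) pair with the highest predicted score."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(n_samples):
        d = rng.uniform(0.5, 4.0)            # candidate distance in meters
        a = rng.uniform(-math.pi, math.pi)   # candidate orientation offset
        score = p_recognition(d, a)
        if score > best_score:
            best, best_score = (d, a), score
    return best, best_score


if __name__ == "__main__":
    (dist, ang), score = estimate_goal_state()
    print(f"goal: distance={dist:.2f} m, "
          f"orientation={ang:.2f} rad, score={score:.3f}")
```

In the full system described by the abstract, the selected goal state would then be handed to the reactive proxemic controller for realization, with the cost-based trajectory planner shaping the path toward it.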
