Abstract

As Human-Robot Interaction (HRI) matures, robots are expected to learn increasingly complex and demanding interaction skills. Complex HRI tasks often constrain a robot's motion jointly in task space and in joint space. In imitation learning, such joint constraints across the two spaces have been explored, but few studies address them in Human-Robot Interaction. In this paper, building on the Interaction Primitives framework (an HRI framework), we propose an interaction inference method that first generalizes the robot's movement in the two spaces synchronously and then probabilistically fuses the two movements via Bayesian estimation. The method was validated on a task in which the robot follows a human-held object; its inference errors (RMSE and MAE) in both task space and joint space are smaller than those of Interaction Primitives using single-space inference.
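The abstract does not give the fusion equations, but probabilistically combining two independent Gaussian predictions via Bayesian estimation typically reduces to a precision-weighted product of Gaussians. The sketch below illustrates that generic fusion step only; the function name, dimensions, and numbers are hypothetical and not taken from the paper.

```python
import numpy as np

def fuse_gaussian_estimates(mu_a, cov_a, mu_b, cov_b):
    """Precision-weighted Bayesian fusion of two Gaussian estimates of the
    same quantity, e.g. a joint configuration predicted independently from
    joint-space inference and from task-space inference mapped to joints."""
    prec_a = np.linalg.inv(cov_a)
    prec_b = np.linalg.inv(cov_b)
    cov_fused = np.linalg.inv(prec_a + prec_b)          # combined precision
    mu_fused = cov_fused @ (prec_a @ mu_a + prec_b @ mu_b)
    return mu_fused, cov_fused

# Hypothetical 3-DoF example with different per-dimension confidences
mu_joint = np.array([0.10, -0.40, 0.25])   # joint-space prediction
cov_joint = np.diag([0.02, 0.05, 0.03])
mu_task = np.array([0.12, -0.35, 0.20])    # task-space prediction (in joint coords)
cov_task = np.diag([0.04, 0.01, 0.06])

mu, cov = fuse_gaussian_estimates(mu_joint, cov_joint, mu_task, cov_task)
print(mu)  # fused mean leans toward the lower-variance source per dimension
```

The key property this captures is that each dimension of the fused estimate is pulled toward whichever space predicts it more confidently, which is the intuition behind combining task-space and joint-space inference rather than relying on either alone.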
