Abstract

Programming by Demonstration (PbD) is used to transfer a task from a human teacher to a robot, where understanding the underlying structure of what has been demonstrated is of high interest. Such a demonstrated task can be represented as a sequence of so-called actions or skills. This work focuses on the recognition part of the task transfer. We propose a framework that recognizes skills online during a kinesthetic demonstration by means of position and force–torque (wrench) sensing; it therefore works independently of visual perception. The recognized skill sequence constitutes a task representation that lets the user intuitively understand what the robot has learned. The skill recognition algorithm combines symbolic skill segmentation, which makes use of pre- and post-conditions, with data-driven prediction, which uses support vector machines for skill classification. This combines the advantages of both techniques: the inexpensive evaluation of symbols and the data-driven classification of complex observations. The framework is thus able to detect a larger variety of skills, such as manipulation and force-based skills used in assembly tasks. The applicability of our framework is demonstrated in a user study that achieves 96% accuracy in online skill recognition and highlights the benefits of the generated task representation in comparison to a baseline representation. The results show that task load could be reduced, trust and explainability could be increased, and that users were able to debug the robot program using the generated task representation.
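The abstract describes a hybrid recognizer that filters admissible skills via symbolic pre-conditions and then classifies position and wrench features with a support vector machine. The following is a minimal sketch of that idea, not the authors' implementation: the skill names, pre-conditions, feature layout, and synthetic training data are illustrative assumptions, and scikit-learn's SVC stands in for whatever SVM setup the paper actually uses.

```python
# Hypothetical sketch: symbolic pre-condition filtering combined with an
# SVM classifier over position/wrench window features.
import numpy as np
from sklearn.svm import SVC

SKILLS = ["move", "contact", "insert"]  # illustrative skill labels

# Symbolic pre-conditions over the current symbolic state (cheap to evaluate).
PRE_CONDITIONS = {
    "move":    lambda s: True,                                     # always admissible
    "contact": lambda s: not s["in_contact"],                      # must start out of contact
    "insert":  lambda s: s["in_contact"] and s["gripper_closed"],  # needs contact + closed gripper
}

def candidate_skills(symbolic_state):
    """Return the skills whose pre-conditions hold in the current symbolic state."""
    return [k for k in SKILLS if PRE_CONDITIONS[k](symbolic_state)]

def window_features(positions, wrenches):
    """Summarize a window of end-effector positions (N x 3) and wrenches (N x 6)
    into a fixed-length feature vector for the SVM."""
    return np.concatenate([
        positions.mean(axis=0), positions.std(axis=0),
        wrenches.mean(axis=0), wrenches.std(axis=0),
    ])

# Train on synthetic demonstration windows (placeholder for labeled real data).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 18))            # 3+3+6+6 = 18 features per window
y = rng.choice(SKILLS, size=300)          # random labels, for illustration only
clf = SVC(kernel="rbf", probability=True).fit(X, y)

def recognize(positions, wrenches, symbolic_state):
    """Online step: restrict the SVM prediction to symbolically admissible skills."""
    feats = window_features(positions, wrenches).reshape(1, -1)
    probs = dict(zip(clf.classes_, clf.predict_proba(feats)[0]))
    admissible = candidate_skills(symbolic_state)
    return max(admissible, key=lambda k: probs[k])

# Example call with a dummy sensor window.
state = {"in_contact": True, "gripper_closed": True}
print(recognize(rng.normal(size=(50, 3)), rng.normal(size=(50, 6)), state))
```

In this sketch the symbolic layer only narrows the candidate set, while the SVM resolves the remaining ambiguity from the sensor data, mirroring the division of labor described in the abstract.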
