Abstract

As an important requirement in human–robot interaction (HRI), human-friendly motion is attracting increasing attention. In many scenarios, a robot must be able to generalize a similar motion from demonstration, and this similarity is generally captured by the spatial relationship between the robot's different joints. Although motion generalization has been widely investigated and good progress has been achieved, existing methods share two limitations: they leave this spatial relationship out of consideration, and they are difficult to apply across different robots. In this paper, we propose a novel topology-based motion generalization (TMG) method that abstracts the motion generalization problem as a mesh deformation optimization, in which the spatial relationship between different parts of the robot is captured with a topology-based representation. Instead of considering only individual joint positions, we model the relationship semantics with Laplacian coordinates, and motion generalization from demonstration to reproduction is realized by preserving these semantics through Laplacian deformation, even when the robot or the target position changes. Furthermore, motion generalization between single or multiple different robots can be achieved by preserving and transferring the spatial relationship. Our experimental results show that reproduction based on the topology-based representation outperforms mapping methods trained on end-effector poses or joint angles, and ensures robust motion through spatial relationship preservation.
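The core idea of Laplacian-coordinate preservation can be illustrated with a minimal sketch. Here each joint's Laplacian coordinate is its offset from the average of its chain neighbours; a deformation is found by least squares so that anchor joints (e.g., base and end-effector) reach new targets while these offsets are preserved as much as possible. The chain structure, uniform weights, and soft-constraint weight `w` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def chain_laplacian(n):
    """Uniform graph Laplacian for a chain of n joints: each row
    encodes a joint's offset from the mean of its neighbours."""
    L = np.eye(n)
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    return L

def deform(points, anchors, targets, w=10.0):
    """Laplacian deformation sketch: move the anchor joints to new
    targets while preserving each joint's Laplacian coordinate.
    `w` trades off anchor accuracy against shape preservation
    (an assumed soft-constraint formulation)."""
    n = len(points)
    L = chain_laplacian(n)
    delta = L @ points                      # relationship "semantics"
    S = np.zeros((len(anchors), n))         # anchor selection matrix
    S[np.arange(len(anchors)), anchors] = 1.0
    A = np.vstack([L, w * S])
    b = np.vstack([delta, w * np.asarray(targets, float)])
    new_points, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_points

# Demonstration: a 5-joint planar chain; move the end-effector to a
# new target while the internal joint relationships are preserved.
pts = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], float)
out = deform(pts, anchors=[0, 4], targets=[[0, 0], [3.5, 1.0]])
```

With the anchors set to their original positions, the solver reproduces the demonstrated pose exactly; moving an anchor bends the whole chain smoothly rather than displacing a single joint, which is the relationship-preservation behaviour the abstract describes.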
