Learning from demonstration (LfD) has been widely studied as a convenient method for robot learning. In the LfD paradigm for redundant manipulators, the reproduced trajectories should resemble the human demonstrations in both task space and joint space. Despite many advances in learning in task space, generating joint-space reproductions that remain faithful to the demonstration is still largely an open problem. This paper proposes a novel, computationally efficient non-parametric LfD framework for 7-DOF anthropomorphic manipulators. The proposed method leverages redundancy resolution and kernel-based approaches to formulate an efficient model characterized by a limited set of open parameters. Experiments were conducted to evaluate the performance of the proposed method and to compare it with the commonly used 'LfD+IK' solution. The results indicate that the proposed method achieves markedly higher similarity between demonstration and reproduction while maintaining high computational efficiency. Because the proposed method learns effectively from human demonstrations in both task and joint space, it has the potential to significantly enhance human–robot collaboration, streamline assembly-line processes, and improve robot learning. An important future challenge is to extend the proposed method to general-purpose redundant manipulators and to incorporate task constraints for performing complex tasks.
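To illustrate the flavour of a non-parametric, kernel-based trajectory model with only a few open parameters, the following is a minimal sketch using kernel ridge regression from time to joint angles. It is not the paper's formulation and omits redundancy resolution; all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch of a non-parametric, kernel-based reproduction of a
# demonstrated 7-DOF joint trajectory (illustrative only; not the
# method proposed in the paper). Open parameters: kernel length scale
# and a regularization term.
import numpy as np

def rbf_kernel(a, b, length_scale=0.1):
    # Squared-exponential kernel between two sets of time stamps.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def reproduce_joint_trajectory(t_demo, q_demo, t_query,
                               length_scale=0.1, reg=1e-6):
    """Kernel ridge regression mapping time to joint angles.

    t_demo : (N,)   demonstration time stamps
    q_demo : (N, 7) demonstrated joint angles of a 7-DOF arm
    t_query: (M,)   query time stamps for the reproduction
    """
    K = rbf_kernel(t_demo, t_demo, length_scale)
    alpha = np.linalg.solve(K + reg * np.eye(len(t_demo)), q_demo)
    return rbf_kernel(t_query, t_demo, length_scale) @ alpha

# Example usage with synthetic demonstration data.
t = np.linspace(0.0, 1.0, 50)
q = np.stack([np.sin(2 * np.pi * t + k) for k in range(7)], axis=1)
q_rep = reproduce_joint_trajectory(t, q, np.linspace(0.0, 1.0, 200))
```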