Abstract
Learning from demonstration (LfD) has been widely studied as a convenient method for robot learning. In the LfD paradigm for redundant manipulators, the reproduced trajectories ought to be similar to human demonstrations in both task and joint space. Despite many advances in task-space learning, generating joint-space reproductions that remain similar to the demonstrations is still largely an open problem. In this paper, a novel non-parametric LfD framework with high computational efficiency is proposed for 7-DOF anthropomorphic manipulators. The proposed method leverages redundancy resolution and kernel-based approaches to formulate an efficient model characterized by a limited set of open parameters. Experiments were conducted to evaluate the performance of the proposed method and to compare it with the commonly used ‘LfD+IK’ solution. The results indicate that the proposed method performs substantially better in terms of the similarity between demonstration and reproduction, while maintaining high computational efficiency. Because the proposed method can learn effectively from human demonstrations in both task and joint space, it has the potential to significantly enhance human–robot collaboration, streamline assembly line processes, or improve robot learning. An important future challenge will be extending the proposed method to general-purpose redundant manipulators and incorporating task constraints to perform complex tasks.
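To make the two ingredients named in the abstract concrete, the sketch below illustrates (a) a kernel-based, non-parametric reproduction of a time-indexed joint-space demonstration via Nadaraya–Watson regression, and (b) a single differential-IK step with nullspace projection, the standard form of redundancy resolution used by ‘LfD+IK’-style baselines. This is a minimal illustration under assumed conventions, not the authors' actual formulation; the function names, the Gaussian kernel, the bandwidth, and the damping value are all hypothetical choices for the example.

```python
import numpy as np

def kernel_reproduce(t_query, t_demo, q_demo, bandwidth=0.05):
    """Nadaraya-Watson kernel regression (assumed kernel-based model):
    reproduce a joint configuration q(t_query) from time-indexed
    demonstration samples (t_demo, q_demo) of a 7-DOF arm."""
    # Gaussian kernel weights between the query time and each demo sample
    w = np.exp(-0.5 * ((t_query - t_demo) / bandwidth) ** 2)
    w /= w.sum() + 1e-12
    return w @ q_demo  # weighted average of demonstrated joint vectors

def redundancy_resolution_step(J, xdot, qdot_secondary, damping=1e-2):
    """One differential-IK step with nullspace projection:
    qdot = J_pinv @ xdot + (I - J_pinv @ J) @ qdot_secondary,
    so a secondary joint-space objective acts only in the task nullspace."""
    JT = J.T
    # Damped least-squares pseudoinverse for numerical robustness near singularities
    J_pinv = JT @ np.linalg.inv(J @ JT + damping**2 * np.eye(J.shape[0]))
    N = np.eye(J.shape[1]) - J_pinv @ J  # nullspace projector
    return J_pinv @ xdot + N @ qdot_secondary

# Toy demonstration: 7 joints following smooth sinusoids over 100 samples
t_demo = np.linspace(0.0, 1.0, 100)
q_demo = np.stack([np.sin(2 * np.pi * t_demo + 0.3 * k) for k in range(7)], axis=1)

# Reproduce the joint-space trajectory on a denser time grid
t_query = np.linspace(0.0, 1.0, 300)
q_repro = np.array([kernel_reproduce(t, t_demo, q_demo) for t in t_query])
print(q_repro.shape)  # (300, 7)
```

In this toy setup the reproduction interpolates the demonstrated joint trajectory directly, whereas an ‘LfD+IK’ pipeline would first learn the task-space trajectory and then recover joint motions through repeated calls to a redundancy-resolution step like the one above, which is why its joint-space reproductions can drift away from the demonstration.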