Abstract

Robot learning from humans is a promising paradigm for directly transferring human skills to robots. It allows robots to encapsulate task constraints and motion patterns from human demonstrations and to acquire skills that can be adapted to unseen scenarios. Despite many state-of-the-art successes in skill learning, simultaneously capturing the variability of a complex, long-horizon manipulation task and generalizing under external uncertainty remains a challenge. Efficient skill learning must handle large-scale, high-dimensional demonstrations; adapt to environmental changes (start, via, and end points, and obstacles); generalize to task constraints (trajectory precision, stiffness); and quantify uncertainty in the reproduction. To this end, we present a novel robot skill-learning framework, SVGP-CoGP, that implements all of the aforementioned properties by encoding task variability from multiple demonstrations using sparse variational Gaussian processes (SVGP) and adapting to additional constraints via a coregionalized multi-output GP (CoGP) built on SVGP. The proposed method significantly reduces the computational complexity of model fitting by exploiting the variational inference of GP models, which makes it possible for robots to learn skills from complex and long-horizon tasks. We evaluated our framework against existing probabilistic methods on a Kinova robot performing emergency button-pressing tasks. The results indicate that our framework allows the robot to learn skills from complex and long-horizon manipulation tasks and outperforms the baselines both in quantitative evaluation and in an online test.

Note to Practitioners–The objective of this work is to address robot learning of complex, long-horizon manipulation tasks, so that end-users can teach robots new tasks through human demonstrations instead of programming. We begin with a brief historical overview of widely used methods and summarize five prominent capabilities that a skill-learning approach should have: variability, uncertainty, correlation, extrapolation, and adaptability. We then propose an entirely GP-based skill-learning framework that simultaneously addresses all of these capabilities by using a sparse variational Gaussian process (SVGP) in conjunction with a coregionalized multi-output GP model. The proposed framework incorporates variational inference and kernel treatments so that the robot can learn skills from large-scale demonstrations and high-dimensional trajectories. Finally, experimental evaluation and performance comparisons were performed on a real-robot button-pressing task; the results indicate that our method enables robots to achieve complex and long-horizon manipulation tasks in dynamic and unstructured environments. With the rapid development of collaborative robots in service and industry, our findings apply to scenarios as diverse as robot learning from demonstration, robot skill learning, human–robot collaboration, and other complex and long-horizon manipulation tasks.
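To make the ingredients above concrete, the following is a minimal sketch of the core idea: a sparse variational GP whose output dimensions are coupled through a coregionalization kernel. This is our illustration, not the authors' code; GPflow is assumed only as a convenient off-the-shelf implementation of SVGP and coregionalization kernels, and all data shapes, hyperparameters, and variable names are hypothetical.

import numpy as np
import gpflow

# Toy stand-in for demonstration data: each row of X is (time, output index),
# and the observation carries the same index in the last column of Y, as
# required by gpflow's SwitchedLikelihood.
D = 3                                   # e.g. x, y, z of the end-effector
N = 500
t = np.random.rand(N, 1)                # normalized time stamps in [0, 1]
idx = np.random.randint(0, D, (N, 1)).astype(float)
X = np.hstack([t, idx])                 # augmented input: [time, output index]
obs = np.sin(2 * np.pi * t + idx) + 0.05 * np.random.randn(N, 1)
Y = np.hstack([obs, idx])

# Kernel: a temporal kernel on the time axis multiplied by a coregionalization
# kernel on the index axis, which couples the output dimensions (the CoGP part).
kernel = (gpflow.kernels.Matern52(active_dims=[0])
          * gpflow.kernels.Coregion(output_dim=D, rank=1, active_dims=[1]))

# One Gaussian likelihood per output dimension, selected by the index column.
likelihood = gpflow.likelihoods.SwitchedLikelihood(
    [gpflow.likelihoods.Gaussian() for _ in range(D)])

# Sparse variational GP: M inducing points reduce training cost from the O(N^3)
# of an exact GP to O(N M^2), which is what makes large-scale, long-horizon
# demonstrations tractable. Inducing locations here are illustrative.
M = 30
Z = np.hstack([np.linspace(0, 1, M)[:, None], np.zeros((M, 1))])
model = gpflow.models.SVGP(kernel, likelihood, inducing_variable=Z, num_data=N)

gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)),
    model.trainable_variables,
    options={"maxiter": 1000})

# Reproduction with uncertainty: predictive mean and variance along a new time
# grid for output dimension 0; the variance quantifies confidence in the
# reproduced trajectory.
t_new = np.linspace(0, 1, 100)[:, None]
mean, var = model.predict_f(np.hstack([t_new, np.zeros_like(t_new)]))

The augmented-input construction (an output-index column plus the product of a temporal kernel and a Coregion kernel) is the standard intrinsic coregionalization model; the inducing points are what turn the exact GP into the sparse variational one whose lower training cost the abstract refers to.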
