Abstract

In this paper, we present a method for a robot to learn point-to-point motions from human demonstrations. The motion is modelled as a nonlinear dynamic system known as a dynamic movement primitive (DMP). The original DMP can only learn from a single demonstration. To learn from multiple demonstrations of a specific task, we combine the DMP with Gaussian mixture models (GMMs) and learn the nonlinear part of the DMP through Gaussian mixture regression (GMR). In this way, more features of the same skill can be extracted to generate a better motion, while the desirable properties of the original DMP, e.g., generalization and spatial and temporal scaling, are inherited. A motion capture sensor is used in this work to record the human tutor's demonstrations. The effectiveness of the developed method is verified on a virtual Baxter robot platform.
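The combination described above can be illustrated with a minimal one-dimensional sketch: forcing-term targets are extracted from several demonstrations, a GMM is fitted over (phase, forcing term) pairs, and GMR supplies the nonlinear term when the DMP is integrated to reproduce the motion. The DMP gains, the number of Gaussians, the toy minimum-jerk demonstrations, and the helper names (forcing_targets, gmr) are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def forcing_targets(y, dt, alpha=25.0, beta=6.25, alpha_x=1.0, tau=1.0):
    """Forcing-term targets f(x) of a 1-D DMP computed from one demonstration y(t)."""
    yd = np.gradient(y, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y[0], y[-1]
    t = np.arange(len(y)) * dt
    x = np.exp(-alpha_x * t / tau)                        # canonical system (phase)
    f = tau ** 2 * ydd - alpha * (beta * (g - y) - tau * yd)
    return x, f / (x * (g - y0) + 1e-12)                  # remove the spatial scaling

def gmr(gmm, x):
    """Gaussian mixture regression: E[f | x] under a 2-D GMM over (x, f)."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    x = np.atleast_1d(x)
    # responsibilities of each Gaussian component for the query phase x
    h = np.stack([w[k] * np.exp(-0.5 * (x - mu[k, 0]) ** 2 / cov[k, 0, 0])
                  / np.sqrt(cov[k, 0, 0]) for k in range(len(w))])
    h /= h.sum(axis=0, keepdims=True) + 1e-12
    # conditional mean of f given x for each Gaussian component
    cond = np.stack([mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (x - mu[k, 0])
                     for k in range(len(w))])
    return (h * cond).sum(axis=0)

# Toy demonstrations of the same point-to-point skill: scaled minimum-jerk motions.
dt, T = 0.01, 1.0
t = np.arange(0.0, T + dt, dt)
s = t / T
demos = [(10 * s**3 - 15 * s**4 + 6 * s**5) * (1.0 + 0.05 * k) for k in range(5)]

# Fit a single GMM to the (phase, forcing-term) pairs of all demonstrations.
X, F = [], []
for y in demos:
    x, f = forcing_targets(y, dt)
    X.append(x)
    F.append(f)
data = np.column_stack([np.concatenate(X), np.concatenate(F)])
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(data)

# Reproduce the motion by integrating the DMP with the GMR-predicted forcing term.
alpha, beta, alpha_x, tau = 25.0, 6.25, 1.0, 1.0
y0, g = demos[0][0], demos[0][-1]
y, yd, x = y0, 0.0, 1.0
trajectory = []
for _ in t:
    f = gmr(gmm, x)[0] * x * (g - y0)                     # restore the spatial scaling
    ydd = (alpha * (beta * (g - y) - tau * yd) + f) / tau ** 2
    yd += ydd * dt
    y += yd * dt
    x += -alpha_x * x / tau * dt
    trajectory.append(y)

print(f"start {trajectory[0]:.3f} -> end {trajectory[-1]:.3f} (goal {g:.3f})")
```

Because the goal g and start y0 enter the DMP explicitly, the same GMR model can be reused to generalize the reproduced motion to new start and goal positions or time scales, which is the scaling property the abstract refers to.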
