Abstract

Trajectory learning and generation from demonstration have been widely studied in recent years, with promising progress. Existing approaches, including the Gaussian Mixture Model (GMM), affine functions, and Dynamic Movement Primitives (DMPs), have proven their ability to learn the features and styles of demonstrated trajectories and to generate similar trajectories that adapt to different dynamic situations. However, in many applications, such as grasping an object or shooting a ball, different goals require trajectories of different styles, so a key issue is how to reproduce a trajectory in a style suited to the goal. In this paper, we propose a style-adaptive trajectory generation approach based on DMPs, by which the style of the reproduced trajectory changes smoothly as the goal changes. The proposed approach first adopts a Point Distribution Model (PDM) to extract the principal trajectories of the different styles, then learns a model of each principal trajectory independently using DMPs, and finally adapts the parameters of the trajectory model smoothly to the new goal through an adaptive goal-to-style mechanism. The paper further discusses applications of the approach to an adaptive shooting task on small-sized robots and to generating table-tennis strokes with different styles on a humanoid robot arm.
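As a rough illustration of the pipeline, the sketch below (plain NumPy, one degree of freedom) fits a standard discrete DMP to each principal trajectory and then blends their forcing-term weights as a function of the new goal. The Gaussian-kernel blend in `style_adapted_weights` is only a hypothetical stand-in for the paper's adaptive goal-to-style mechanism; all names, demonstrations, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class DMP:
    """Discrete Dynamic Movement Primitive for one degree of freedom (Ijspeert-style)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.n_basis, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
        # Basis-function centers spaced evenly in the canonical phase s in (0, 1].
        self.centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        self.widths = n_basis ** 1.5 / self.centers     # common width heuristic
        self.w = np.zeros(n_basis)                      # forcing-term weights
        self.x0, self.g = 0.0, 1.0

    def _forcing(self, s):
        psi = np.exp(-self.widths * (s - self.centers) ** 2)
        return s * (self.g - self.x0) * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, demo):
        """Learn forcing-term weights from one demonstration (duration normalized to 1 s)."""
        T = len(demo)
        dt = 1.0 / T
        self.x0, self.g = demo[0], demo[-1]
        vel = np.gradient(demo, dt)
        acc = np.gradient(vel, dt)
        s = np.exp(-self.alpha_s * np.arange(T) * dt)   # canonical phase over the demo
        # Forcing values the demo implies under the spring-damper transformation system.
        f_target = acc - self.alpha * (self.beta * (self.g - demo) - vel)
        xi = s * (self.g - self.x0)
        for i in range(self.n_basis):                   # locally weighted regression per basis
            psi = np.exp(-self.widths[i] * (s - self.centers[i]) ** 2)
            self.w[i] = (psi * xi) @ f_target / ((psi * xi) @ xi + 1e-10)
        return self

    def rollout(self, goal, T=200):
        """Integrate the DMP toward a (possibly new) goal and return the trajectory."""
        dt = 1.0 / T
        x, v, s = self.x0, 0.0, 1.0
        self.g = goal
        traj = np.empty(T)
        for t in range(T):
            a = self.alpha * (self.beta * (goal - x) - v) + self._forcing(s)
            v += a * dt
            x += v * dt
            s += -self.alpha_s * s * dt
            traj[t] = x
        return traj


def style_adapted_weights(new_goal, style_goals, style_dmps, sigma=0.5):
    """Hypothetical goal-to-style mechanism: blend per-style forcing-term weights
    with Gaussian kernels on the distance between the new goal and each style's goal."""
    d = np.array([abs(new_goal - g) for g in style_goals])
    k = np.exp(-d ** 2 / (2 * sigma ** 2))
    k /= k.sum() + 1e-10
    return sum(ki * dmp.w for ki, dmp in zip(k, style_dmps))


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 200)
    # Two principal trajectories of different styles (e.g. obtained from a PDM).
    dmp_a = DMP().fit(0.5 * t + 0.2 * np.sin(np.pi * t))   # style A, nominal goal 0.5
    dmp_b = DMP().fit(1.5 * t ** 2)                        # style B, nominal goal 1.5

    new_goal = 0.9
    blended = DMP()
    blended.w = style_adapted_weights(new_goal, [0.5, 1.5], [dmp_a, dmp_b])
    path = blended.rollout(new_goal)    # style varies smoothly as the goal moves between 0.5 and 1.5
```

Because the blend weights vary continuously with the goal, the generated trajectory interpolates smoothly between the learned styles rather than switching abruptly, which is the behavior the abstract describes for the goal-to-style mechanism.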
