Abstract

Imitation learning is a method for enabling robots to adaptively reproduce human demonstrations. Its generalization ability has been shown to allow robots to perform tasks even in untrained environments. However, motion styles, such as the motion trajectory and the amount of applied force, depend largely on the human demonstration dataset and converge to an average style. In this study, we propose a method that adds a parametric bias to a conventional imitation learning network, making it possible to impose constraints on the motion style. Through experiments with the PR2 and the musculoskeletal humanoid MusashiLarm, we show that the robot can perform tasks while changing its motion style as intended, under constraints on joint velocity, muscle length velocity, and muscle tension.

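To illustrate the core idea, the sketch below shows a minimal recurrent predictor with a parametric bias, in the spirit of RNNPB-style models. This is an assumption-laden illustration, not the paper's architecture: the network layout, dimensions, and the use of PyTorch are all hypothetical, and the paper's actual constraint mechanism on joint velocity or muscle tension is not reproduced here. The key point shown is that a small learnable bias vector, trained jointly with the network weights, encodes motion style and can be adjusted at inference time to shift the style.

```python
# Minimal sketch of an imitation-learning predictor with parametric bias.
# Assumption: architecture and dimensions are illustrative only.
import torch
import torch.nn as nn

class RNNWithParametricBias(nn.Module):
    def __init__(self, state_dim, pb_dim, hidden_dim, num_styles):
        super().__init__()
        # One small bias vector per demonstrated style, learned like an
        # embedding; at inference it can be fixed or re-optimized to
        # change the motion style of the generated trajectory.
        self.pb = nn.Embedding(num_styles, pb_dim)
        self.rnn = nn.LSTM(state_dim + pb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, states, style_idx):
        # states: (batch, time, state_dim); style_idx: (batch,)
        pb = self.pb(style_idx)                            # (batch, pb_dim)
        pb = pb.unsqueeze(1).expand(-1, states.size(1), -1)
        out, _ = self.rnn(torch.cat([states, pb], dim=-1))
        return self.head(out)                              # next-state prediction

# Training step sketch: predict the next state of each demonstration.
model = RNNWithParametricBias(state_dim=8, pb_dim=2, hidden_dim=64, num_styles=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.randn(4, 50, 8)            # dummy demonstration trajectories
style = torch.randint(0, 3, (4,))         # dummy style labels
pred = model(states[:, :-1], style)
loss = nn.functional.mse_loss(pred, states[:, 1:])
loss.backward()
opt.step()
```

Because the parametric bias is a low-dimensional input rather than a separate network per style, a single model can interpolate between demonstrated styles by moving the bias vector, which is what makes style constraints at execution time feasible.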