Abstract

In this study, we show that the motor control performance of a humanoid robot can be improved efficiently by reusing its previous experiences within a Reinforcement Learning (RL) framework. RL is becoming a common approach for acquiring a nonlinear optimal policy through trial and error. However, applying RL to real robot control is difficult because it usually requires many learning trials, and such trials cannot be executed in real environments due to the limited durability of the physical system. Therefore, instead of executing many learning trials, we use a recently developed RL algorithm, importance-weighted Policy Gradients with Parameter-based Exploration (PGPE), with which the robot can efficiently reuse previously sampled data to improve its policy parameters. We apply importance-weighted PGPE to CB-i, our real humanoid robot, and show that it can learn both target-reaching and cart-pole swing-up movements in a real environment within 10 minutes, without any prior knowledge of the task or any carefully designed initial trajectory.
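
To illustrate the idea of reusing previously sampled data, the following is a minimal sketch of importance-weighted PGPE on a toy task. It is not the paper's implementation: the quadratic return, the plain Gaussian hyper-distribution, the self-normalized weights, and all function and variable names are illustrative assumptions. The key step is weighting each stored sample by the ratio of its probability under the current hyper-distribution to its probability under the distribution it was drawn from.

```python
import numpy as np

# Sketch of importance-weighted PGPE (illustrative, not the paper's code).
# Policy parameters theta are sampled from a Gaussian hyper-distribution
# N(mu, sigma^2); old samples are reused via importance weights.

def rollout_return(theta):
    # Placeholder task: return is higher the closer theta is to an unknown optimum.
    return -np.sum((theta - 2.0) ** 2)

def log_gaussian(theta, mu, sigma):
    # Log-density of a diagonal Gaussian, used to compute importance weights.
    return -0.5 * np.sum(((theta - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

dim = 3
mu, sigma = np.zeros(dim), np.ones(dim)
alpha = 0.1                      # learning rate for the hyper-parameters
history = []                     # (theta, return, mu_behavior, sigma_behavior)

for iteration in range(100):
    # Sample a small batch of policy parameters from the current hyper-distribution.
    thetas = mu + sigma * np.random.randn(5, dim)
    for theta in thetas:
        history.append((theta, rollout_return(theta), mu.copy(), sigma.copy()))

    # Reuse all previously sampled data with importance weights
    # w = p_current(theta) / p_behavior(theta).
    thetas_all = np.array([h[0] for h in history])
    returns = np.array([h[1] for h in history])
    log_w = np.array([
        log_gaussian(t, mu, sigma) - log_gaussian(t, mu_b, sigma_b)
        for t, _, mu_b, sigma_b in history
    ])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                 # self-normalized importance weights

    baseline = np.sum(w * returns)          # weighted baseline to reduce variance
    adv = returns - baseline
    diff = thetas_all - mu

    # Importance-weighted PGPE gradient estimates for the mean and std. deviation.
    grad_mu = np.sum(w[:, None] * adv[:, None] * diff / sigma ** 2, axis=0)
    grad_sigma = np.sum(
        w[:, None] * adv[:, None] * (diff ** 2 - sigma ** 2) / sigma ** 3, axis=0
    )

    mu += alpha * grad_mu
    sigma = np.maximum(sigma + alpha * grad_sigma, 1e-3)  # keep exploration positive

print("estimated optimum:", mu)
```

Because every update draws on the full history of rollouts rather than only the latest batch, the number of new trials needed per improvement step stays small, which is what makes learning on the real robot feasible within minutes.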
