Abstract

Reinforcement learning is effective for robot learning because it requires no a priori knowledge and supports reactive and adaptive behaviors. In our previous work, we proposed a new reinforcement learning algorithm, "Q-learning with Dynamic Structuring of Exploration Space Based on Genetic Algorithm (QDSEGA)", designed for complicated systems with large state-action spaces, such as robots with many redundant degrees of freedom. However, the application of QDSEGA has been restricted to static systems. A snake-like robot has many redundant degrees of freedom, and the dynamics of the system are essential to completing its locomotion task, so applying conventional reinforcement learning directly is difficult. In this paper, we extend the layered structure of QDSEGA so that it is applicable to dynamical systems and to a real robot. We apply the extended method to the acquisition of locomotion patterns for a snake-like robot and demonstrate the validity of QDSEGA with the extended layered structure through simulation and experiment.
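The layered idea named in the abstract, a genetic algorithm that selects restricted subsets of a large action space, with ordinary tabular Q-learning run inside each restricted subspace, can be sketched roughly as follows. This is a minimal illustration under assumed conditions, not the authors' implementation: the toy one-state reward, the fitness definition, and all parameter values are assumptions made for the sketch.

```python
import random

# Illustrative sketch of a QDSEGA-style layered structure (assumptions, not
# the paper's code): a GA maintains individuals that are subsets of a large
# action set, and plain tabular Q-learning runs inside each subspace.

FULL_ACTIONS = list(range(100))   # assumed large, redundant action set
SUBSPACE_SIZE = 8                 # actions visible to Q-learning at once
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def q_learning_in_subspace(actions, episodes=50):
    """Run epsilon-greedy tabular Q-learning restricted to `actions`.
    Returns (fitness, Q). A toy one-state task stands in for the robot:
    the reward favors action 42 (in the paper, fitness would come from
    locomotion performance)."""
    q = {a: 0.0 for a in actions}
    total = 0.0
    for _ in range(episodes):
        if random.random() < EPS:
            a = random.choice(actions)          # explore
        else:
            a = max(q, key=q.get)               # exploit
        r = 1.0 if a == 42 else 0.0             # toy reward signal
        q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])
        total += r
    return total, q

def evolve(pop_size=10, generations=5):
    """GA layer: each individual is an action subset; its fitness is the
    return achieved by Q-learning inside that subset."""
    pop = [random.sample(FULL_ACTIONS, SUBSPACE_SIZE) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: q_learning_in_subspace(ind)[0],
                        reverse=True)
        parents = scored[:pop_size // 2]        # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            # one-point crossover on the action subsets, deduplicated
            child = list(set(a[:SUBSPACE_SIZE // 2] + b[SUBSPACE_SIZE // 2:]))
            while len(child) < SUBSPACE_SIZE:   # mutation: refill with
                c = random.choice(FULL_ACTIONS) # fresh random actions
                if c not in child:
                    child.append(c)
            children.append(child[:SUBSPACE_SIZE])
        pop = parents + children
    return max(pop, key=lambda ind: q_learning_in_subspace(ind)[0])

best_subspace = evolve()
```

The design point the sketch tries to convey is the division of labor: the GA layer restructures which region of the huge action space is explored, while the Q-learning layer only ever has to learn over a small subspace at a time.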
