Abstract

We recently proposed swarm reinforcement learning methods in which multiple agent-environment pairs are prepared and the agents learn not only by individually performing a conventional reinforcement learning method but also by exchanging information with one another. The Q-learning method has been used as the individual learning algorithm in these methods, and they have been applied to problems with discrete state-action spaces. In the real world, however, many problems are formulated with continuous state-action spaces. This paper proposes swarm reinforcement learning methods based on an actor-critic method in order to acquire optimal policies rapidly for problems with continuous state-action spaces. The proposed methods are applied to a biped robot control problem, and their performance is examined through numerical experiments.
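The scheme described above can be illustrated with a minimal sketch: several actor-critic agents, each with a one-parameter Gaussian policy on a toy continuous-action task, learn independently and periodically exchange information by pulling their policy parameters toward the currently best-performing agent. The toy reward function, the learning rates, and the pull-toward-best exchange rule are illustrative assumptions, not the paper's actual algorithm or robot task.

```python
import random

TARGET = 2.0   # optimum of the toy continuous-action task (illustrative)
SIGMA = 1.0    # fixed exploration std-dev of the Gaussian policy

def reward(action):
    """Toy continuous-action task: reward peaks at action == TARGET."""
    return -(action - TARGET) ** 2

class ActorCriticAgent:
    """One-parameter Gaussian-policy actor with a scalar critic baseline."""
    def __init__(self, rng, alpha_actor=0.01, alpha_critic=0.1):
        self.theta = rng.uniform(-5.0, 5.0)  # actor parameter (policy mean)
        self.baseline = 0.0                  # critic's estimate of expected reward
        self.alpha_actor = alpha_actor
        self.alpha_critic = alpha_critic
        self.rng = rng

    def episode(self):
        action = self.rng.gauss(self.theta, SIGMA)
        r = reward(action)
        error = r - self.baseline                 # critic's prediction error
        self.baseline += self.alpha_critic * error  # critic update
        # actor update: policy-gradient step scaled by the critic's error
        self.theta += self.alpha_actor * error * (action - self.theta) / SIGMA**2
        return r

def swarm_train(n_agents=5, episodes=500, exchange_every=20, pull=0.5, seed=0):
    rng = random.Random(seed)
    agents = [ActorCriticAgent(rng) for _ in range(n_agents)]
    for ep in range(1, episodes + 1):
        for a in agents:
            a.episode()
        if ep % exchange_every == 0:
            # information exchange: pull every agent toward the current best,
            # using the critic's baseline as the fitness estimate
            best = max(agents, key=lambda a: a.baseline)
            for a in agents:
                if a is not best:
                    a.theta += pull * (best.theta - a.theta)
    return agents

agents = swarm_train()
best = max(agents, key=lambda a: a.baseline)
print(best.theta)  # should lie near TARGET after training
```

The key point of the sketch is that the exchange step lets agents that started far from the optimum benefit from the best agent's progress, rather than relying on individual learning alone.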
