Abstract

Multi-agent reinforcement learning (MARL) is one of the most promising approaches to the problem of multi-robot control. One approach to MARL is cooperative Q-learning (CoQ), which uses a learning state space containing the states and actions of all agents. Despite its mathematical foundation for learning convergence, CoQ often suffers from state-space explosion as the number of agents increases. Another approach to MARL is distributed Q-learning (DiQ), in which each agent uses a learning state space that does not contain the states and actions of the other agents. The state space for DiQ can easily be kept compact, so DiQ seems well suited to multi-robot control problems. However, DiQ has no mathematical guarantee of learning convergence, and it is difficult to apply to multi-robot control problems in which definite assignments among the working robots must be considered to accomplish a mission. To solve these problems in applying DiQ to multi-robot control, we treat the work operated by the robots as a new agent that regulates the robots' motion. We assume that the work can brake its own motion: the work stops moving when a robot attempts to push it in an inappropriate direction. The braking policy for the work is obtained by dynamic programming on a Markov decision process constructed from a map of the environment and the work's geometry. As a result, DiQ converges without a joint state space. Simulation results also show that the proposed method achieves high learning speed.
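The abstract gives no implementation details, but the following minimal Python sketch illustrates how a braking policy of this kind could be derived by dynamic programming (value iteration) on an MDP. The occupancy grid, goal cell, per-step reward of -1, and all identifiers (GRID, GOAL, step, etc.) are illustrative assumptions, not details from the paper; in the proposed method the MDP would be built from the environment map and the work's geometry.

```python
import numpy as np

# Minimal sketch (hypothetical grid-world MDP) of deriving the work's braking
# policy by dynamic programming (value iteration). A small occupancy grid
# stands in for the environment map and the work's geometry.

GRID = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])                       # 1 = obstacle cell (assumed environment map)
GOAL = (2, 3)            # assumed goal pose of the work
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # push directions: up, down, left, right
GAMMA, THETA = 0.95, 1e-6

def step(state, action):
    """Transition model: a push into a wall or obstacle leaves the work in place (braked)."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if 0 <= nr < GRID.shape[0] and 0 <= nc < GRID.shape[1] and GRID[nr, nc] == 0:
        nxt = (nr, nc)
    else:
        nxt = state                             # inappropriate push: the work does not move
    reward = 0.0 if nxt == GOAL else -1.0       # -1 per step until the goal is reached
    return nxt, reward

states = [(r, c) for r in range(GRID.shape[0])
                 for c in range(GRID.shape[1]) if GRID[r, c] == 0]
V = {s: 0.0 for s in states}

# Value iteration: sweep over all states until the value function converges.
while True:
    delta = 0.0
    for s in states:
        if s == GOAL:
            continue
        best = max(r + GAMMA * V[nxt] for nxt, r in (step(s, a) for a in ACTIONS))
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:
        break

# Braking policy: the work allows only value-optimal pushes and brakes the rest.
policy = {}
for s in states:
    if s == GOAL:
        continue
    q = {a: r + GAMMA * V[nxt] for a in ACTIONS for nxt, r in [step(s, a)]}
    best = max(q.values())
    policy[s] = [a for a in ACTIONS if q[a] >= best - 1e-9]

print(policy[(0, 0)])    # appropriate push directions from the top-left cell
```

In this sketch, a robot running DiQ would interact with the braked work just as with an ordinary environment: pushes in directions the policy disallows simply do not move the work, which is how the work agent regulates the robots' motion without enlarging their individual state spaces.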
