Abstract

In this paper, we apply reinforcement learning to a zero-sum Chaser-Invader game, which we model as a Markov game (MG). Unlike the single-agent Markov Decision Process (MDP), an MG captures the interaction of multiple agents; it extends game theory to the MDP setting. We propose an improved algorithm based on the classical Minimax-Q algorithm. First, because Minimax-Q is limited to discrete and simple environments, we replace tabular Q-learning with a Deep Q-network. Second, we propose a generalized policy iteration scheme for the zero-sum game, in which the agent computes a Nash equilibrium action at each step by linear programming. Finally, comparative experiments show that the improved algorithm matches Monte Carlo Tree Search in simple environments and outperforms it in complex environments.
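
As a rough illustration of the linear-programming step mentioned above (a sketch of the standard minimax LP for zero-sum matrix games, not code from the paper), the snippet below computes the value and mixed minimax policy for a payoff matrix. The function name solve_minimax and the use of scipy.optimize.linprog are our assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_minimax(Q):
    """Solve max_pi min_o sum_a pi[a] * Q[a, o] by linear programming.

    Q is an (n_actions, n_opponent_actions) payoff matrix for the
    maximizing agent. Returns (pi, v): the minimax mixed policy and
    the guaranteed game value v. (Illustrative sketch, not the
    paper's implementation.)
    """
    n_a, n_o = Q.shape
    # Decision variables: [pi_1, ..., pi_{n_a}, v]. linprog minimizes,
    # so we minimize -v to maximize the guaranteed value v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # One constraint per opponent action o: v - sum_a pi[a] * Q[a, o] <= 0
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # The policy must be a probability distribution: sum_a pi[a] = 1
    A_eq = np.ones((1, n_a + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:n_a], res.x[-1]

# Example: matching pennies has value 0 and a uniform minimax policy.
Q = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
pi, v = solve_minimax(Q)
print(pi, v)  # ~[0.5, 0.5], ~0.0
```

In a Minimax-Q-style update, Q would be the table of state-action values Q(s, a, o) at the current state, and the returned v would serve as the state value V(s) used in the temporal-difference target.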
