Abstract

Constructing strong AI systems for video games is difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. In spite of recent research progress in popular video game genres such as Atari 2600 console games and multiplayer online battle arena (MOBA) games, to this day strong human players can still defeat the best AI systems in adversarial video games. In this paper, we propose to use a deep Convolutional Neural Network (CNN) to select among a limited set of abstract action choices in Real-Time Strategy (RTS) games, and to utilize the remaining computation time for game tree search to improve low-level tactics. The CNN is trained by supervised learning on game states labeled by Puppet Search, a strategic search algorithm that uses action abstractions. Replacing Puppet Search with a CNN frees up time that can be used for improving units' tactical behavior while executing the strategic plan. Experiments in the μRTS game show that the combined algorithm achieves higher win rates than either of its two independent components and other state-of-the-art μRTS agents. We then present a case study that investigates how deep Reinforcement Learning (RL) can be used in modern video games, such as Total War: Warhammer, to improve tactical multi-agent AI modules. We use popular RL algorithms such as Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C), basic network architectures, and minimal hyper-parameter tuning to learn complex cooperative behaviors that defeat the highest-difficulty built-in AI in medium-scale scenarios.
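To make the strategy-selection idea concrete, the sketch below shows one way a small convolutional network could map a grid-encoded RTS game state to a distribution over a handful of abstract strategy choices and be trained by supervised learning on labels produced by a strategic search such as Puppet Search. This is an illustrative example, not the paper's implementation: the input encoding, map size, channel count, number of abstract actions, and the use of PyTorch are all assumptions made for the sketch.

```python
# Illustrative sketch only: a CNN that scores a few abstract strategy choices
# from a grid-shaped game-state encoding, trained with supervised learning on
# (state, search-label) pairs. All sizes below are placeholder values.
import torch
import torch.nn as nn

class StrategyCNN(nn.Module):
    def __init__(self, in_channels: int = 8, n_strategies: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling keeps it map-size agnostic
        )
        self.head = nn.Linear(64, n_strategies)  # logits over abstract action choices

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One supervised training step on states labeled by a strategic search.
model = StrategyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

states = torch.randn(32, 8, 16, 16)      # batch of encoded game states (placeholder data)
labels = torch.randint(0, 4, (32,))      # strategy indices chosen by the search algorithm

loss = loss_fn(model(states), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, the chosen strategy index would fix the high-level plan, leaving the per-frame time budget for a tactical game tree search over low-level unit actions.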
