Abstract

Real-Time Strategy (RTS) games pose a significant challenge due to their large branching factors and real-time nature. The challenge grows further in partially observable RTS games, where fog-of-war hides part of the game state. This paper focuses on extending Monte Carlo Tree Search (MCTS) algorithms for RTS games to partially observable settings. Specifically, we investigate sampling a single belief state consistent with a perfect memory of all past observations in the current game, and using it to perform MCTS. We evaluate the performance of this approach in the μRTS game simulator, showing that it is only 8%–15% lower than if we could observe the entire game state (i.e., by cheating).
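The core idea described above (sample one belief state consistent with past observations, then search it as if fully observable) can be sketched as follows. This is a minimal illustrative sketch, not the paper's μRTS implementation: the toy "enemy hides in one of N cells" game, the flat UCB1 search standing in for full MCTS, and all function names are assumptions introduced here.

```python
import math
import random

# Toy partially observable game (an illustrative assumption, not the paper's
# muRTS environment): an enemy hides in one of N_CELLS cells; past
# observations have ruled some cells out. The agent attacks one cell
# and receives reward 1 on a hit, 0 otherwise.
N_CELLS = 8

def sample_belief_state(observed_empty, rng):
    # Sample a single fully observable state (a determinization) consistent
    # with everything observed so far in the current game.
    candidates = [c for c in range(N_CELLS) if c not in observed_empty]
    return rng.choice(candidates)

def rollout_value(action, hidden_cell):
    # Trivial stand-in for a game playout from the sampled state.
    return 1.0 if action == hidden_cell else 0.0

def mcts_choose(hidden_cell, iterations, c=1.4):
    # Flat UCB1 over the action set on the sampled state; a full MCTS
    # would grow a tree, but the selection/backup pattern is the same.
    visits = [0] * N_CELLS
    value = [0.0] * N_CELLS
    for t in range(1, iterations + 1):
        def ucb(a):
            if visits[a] == 0:
                return float("inf")  # try every action at least once
            return value[a] / visits[a] + c * math.sqrt(math.log(t) / visits[a])
        a = max(range(N_CELLS), key=ucb)
        reward = rollout_value(a, hidden_cell)
        visits[a] += 1
        value[a] += reward
    # Recommend the most-visited action, as is standard in MCTS.
    return max(range(N_CELLS), key=lambda a: visits[a])

rng = random.Random(0)
observed_empty = {0, 1, 2}                        # cells scouted through the fog
state = sample_belief_state(observed_empty, rng)  # single sampled belief state
action = mcts_choose(state, iterations=200)
```

The key design choice mirrored here is determinization: rather than reasoning over the full belief distribution, a single consistent state is sampled and searched with an unmodified perfect-information algorithm.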
