Abstract

The success of Monte Carlo tree search (MCTS) in many games where αβ-based search has failed naturally raises the question of whether Monte Carlo simulations will eventually also outperform traditional game-tree search in game domains where αβ-based search is currently successful. The forte of αβ-based search is highly tactical deterministic game domains with a small to moderate branching factor, where efficient yet knowledge-rich evaluation functions can be applied effectively. In this paper, we describe an MCTS-based program for playing the game Lines of Action (LOA), a highly tactical slow-progression game exhibiting many of the properties that are difficult for MCTS. The program uses an improved MCTS variant that allows it both to prove the game-theoretical value of nodes in a search tree and to focus its simulations better using domain knowledge. This results in simulations that are superior both in handling tactics and in ensuring game progression. Using the improved MCTS variant, our program is able to outperform even the world's strongest αβ-based LOA program. This is an important milestone for MCTS, because the traditional game-tree search approach had been considered better suited for playing LOA.
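The node-proving mechanism the abstract refers to is known in the MCTS literature as the MCTS-Solver idea: a playout that reaches a terminal position returns an exact game-theoretical value (±∞) instead of a heuristic result, and exact values are propagated up the tree by minimax-style rules rather than averaged. The sketch below illustrates only that backpropagation rule; the Node fields, the function name, and the assumption that a parent's move list is fully expanded are illustrative, not taken from the paper's implementation.

WIN, LOSS = float("inf"), float("-inf")

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total = 0.0      # sum of playout results, from this node's mover's view
        self.proven = None    # WIN, LOSS, or None while the value is unproven

def backpropagate(leaf, result):
    """Propagate a playout result (+1/-1) or an exact value (WIN/LOSS).

    Values follow the negamax convention: each node scores positions from
    the perspective of the player to move there, so results flip sign at
    every ply on the way up.
    """
    node, r = leaf, result
    while node is not None:
        node.visits += 1
        node.total += 1.0 if r in (1, WIN) else -1.0
        if r in (WIN, LOSS):
            node.proven = r
        parent = node.parent
        if parent is not None:
            if node.proven == LOSS:
                # One child lost for its mover: the parent's mover simply
                # plays that move, so the parent is a proven win.
                r = WIN
            elif node.proven == WIN and all(c.proven == WIN for c in parent.children):
                # Every reply wins for the opponent (assuming the parent is
                # fully expanded): the parent is a proven loss.
                r = LOSS
            else:
                # No proof available: fall back to the heuristic playout
                # result, negated for the parent's perspective.
                r = -1 if r in (1, WIN) else 1
        node = parent

# Minimal usage: proving one child lost for its mover immediately
# proves the parent won.
root = Node()
child = Node(parent=root)
root.children.append(child)
backpropagate(child, LOSS)
assert root.proven == WIN and root.visits == 1

In the full solver, the selection step would also prefer proven wins and avoid proven losses; that part is omitted here for brevity.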
