Abstract

Monte Carlo tree search (MCTS) is a sampling-based search algorithm that is state of the art in a variety of games. In many domains, its Monte Carlo rollouts of entire games give it a strategic advantage over traditional depth-limited minimax search with αβ pruning. These rollouts can often detect long-term consequences of moves, freeing the programmer from having to capture these consequences in a heuristic evaluation function. But due to its highly selective tree, MCTS runs a higher risk than full-width minimax search of missing individual moves and falling into traps in tactical situations. This paper proposes MCTS-minimax hybrids that integrate shallow minimax searches into the MCTS framework. Three approaches are outlined, using minimax in the selection/expansion phase, the rollout phase, and the backpropagation phase of MCTS. Without assuming domain knowledge in the form of evaluation functions, these hybrid algorithms are a first step towards combining the strategic strength of MCTS and the tactical strength of minimax. We investigate their effectiveness in the test domains of Connect-4, Breakthrough, Othello, and Catch the Lion, and relate this performance to the tacticality of the domains.
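
To make the idea of the hybrids concrete, the following is a minimal sketch of the rollout-phase variant only: an MCTS loop in which every playout move is chosen by a shallow minimax search instead of uniformly at random. The toy subtraction game, the depth-2 search, and all names (shallow_minimax, hybrid_rollout, uct_search, the iteration count and exploration constant) are illustrative assumptions for this sketch; they are not the paper's implementation or its test domains.

import math
import random

# Toy two-player subtraction game (illustrative stand-in for the paper's domains):
# a pile of stones, each turn removes 1-3, and the player taking the last stone wins.
MOVES = (1, 2, 3)

def legal_moves(pile):
    return [m for m in MOVES if m <= pile]

def shallow_minimax(pile, depth):
    # Negamax to a fixed shallow depth, scored from the player to move.
    # With no evaluation function, unresolved positions are scored 0.
    if pile == 0:
        return -1  # previous player took the last stone: loss for the player to move
    if depth == 0:
        return 0
    return max(-shallow_minimax(pile - m, depth - 1) for m in legal_moves(pile))

def hybrid_rollout(pile, depth=2):
    # Rollout-phase hybrid: each rollout move is picked by a shallow minimax
    # instead of at random. Returns +1 if the player to move at the start of
    # the rollout wins, -1 otherwise.
    sign = 1
    while pile > 0:
        scored = [(-shallow_minimax(pile - m, depth), m) for m in legal_moves(pile)]
        best = max(v for v, _ in scored)
        pile -= random.choice([m for v, m in scored if v == best])
        sign = -sign
    return -sign  # the player who just moved took the last stone and wins

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children, self.untried = [], legal_moves(pile)
        self.visits = 0
        self.wins = 0.0  # from the perspective of the player who moved into this node

def uct_search(root_pile, iterations=2000, c=1.4):
    root = Node(root_pile)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCT while the node is fully expanded and non-terminal.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: add one untried child, if any.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # Rollout: the playout guided by shallow minimax (the hybrid step).
        result = hybrid_rollout(node.pile)
        # Backpropagation: alternate perspectives on the way up.
        while node is not None:
            node.visits += 1
            node.wins += (1 - result) / 2  # 1 if the player who moved into node won
            result = -result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    # From a pile of 10 the winning reply is to take 2 (leaving a multiple of 4).
    print("suggested move:", uct_search(10))

The selection/expansion and backpropagation hybrids named in the abstract would hook into the corresponding steps of uct_search in the same spirit; this sketch only swaps the random playout policy for a minimax-informed one.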
