Abstract
In “Nonasymptotic Analysis of Monte Carlo Tree Search,” D. Shah, Q. Xie, and Z. Xu consider Monte Carlo Tree Search (MCTS), a popular tree-based search strategy, in the context of infinite-horizon discounted Markov decision processes. They show that MCTS with an appropriate polynomial, rather than logarithmic, bonus term indeed leads to the desired convergence property. The authors derive this result by establishing a polynomial concentration property of regret for a class of nonstationary multiarmed bandits. Using this as a building block, they further demonstrate that MCTS, combined with nearest-neighbor supervised learning, acts as a “policy improvement” operator that can iteratively improve the value function approximation.
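To make the distinction concrete, the following sketch contrasts the classic logarithmic UCB-style bonus with a polynomial bonus of the general shape the abstract describes. The constants and exponents (`c`, `beta`, `alpha`, `xi`) are illustrative placeholders, not the paper's actual values, and the selection rule is a generic bandit step, not the authors' full MCTS algorithm.

```python
import math

def bonus_logarithmic(t, s, c=1.0):
    """Classic UCT-style exploration bonus: c * sqrt(log t / s),
    where t is the total visit count and s the arm's visit count."""
    return c * math.sqrt(math.log(t) / s)

def bonus_polynomial(t, s, beta=1.0, alpha=0.5, xi=1.0):
    """Polynomial exploration bonus of the form beta * t^alpha / s^xi.
    beta, alpha, xi are hypothetical constants chosen for illustration."""
    return beta * (t ** alpha) / (s ** xi)

def select_action(means, counts, bonus=bonus_polynomial):
    """Pick the arm maximizing empirical mean + exploration bonus."""
    t = sum(counts)
    return max(range(len(means)),
               key=lambda a: means[a] + bonus(t, counts[a]))
```

For equal visit counts the bonuses coincide across arms and the empirical mean decides; for unequal counts the polynomial bonus favors under-explored arms, and it shrinks polynomially rather than logarithmically in the total count.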