Abstract

UCT is a standard algorithm in the family of Monte Carlo tree search (MCTS) methods, which have been applied to various domains and have achieved remarkable success. This study proposes Leaf-LinUCT, a family of improved LinUCT algorithms that incorporate LinUCB into MCTS. LinUCB outperforms UCB1 in contextual multi-armed bandit problems, owing to a form of online learning based on ridge regression. However, because of the minimax structure of game trees, the ridge regression in LinUCB does not always work well in the context of tree search. In this paper, we remedy this problem and extend our previous work on LinUCT in two ways: (1) by restricting the training data for regression to the frontier nodes of the current search tree, and (2) by setting the feature vector of each internal node to the weighted mean of the feature vectors of its descendant nodes. We also present a new synthetic model, the incremental-random-feature tree, which extends the standard incremental random tree model. In our model, each node has a feature vector representing the characteristics of the corresponding position. The elements of a node's feature vector are randomly perturbed from those of its parent by each move, just as the heuristic score of a node is randomly changed by each move in the standard incremental random tree model. The experimental results show that our Leaf-LinUCT outperformed UCT and existing LinUCT algorithms in both the incremental-random-feature tree and a synthetic game studied in [1].
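
To illustrate the mechanism the abstract refers to, the following is a minimal sketch of the standard LinUCB index (ridge-regression estimate plus an exploration bonus) that LinUCT-style algorithms apply to node feature vectors. The class and parameter names (alpha, lam) are illustrative assumptions, not the paper's implementation, and the flat per-node update ignores the tree-specific adjustments (frontier-only training data, weighted-mean features) that Leaf-LinUCT introduces.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB index for d-dimensional feature vectors (sketch only)."""

    def __init__(self, d, alpha=1.0, lam=1.0):
        self.alpha = alpha          # exploration weight (assumed hyperparameter)
        self.A = lam * np.eye(d)    # ridge-regularized Gram matrix
        self.b = np.zeros(d)        # accumulated reward-weighted features

    def index(self, x):
        """UCB-style score: ridge-regression prediction + exploration bonus."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                       # ridge-regression weights
        return float(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))

    def update(self, x, reward):
        """Online update of the regression statistics after observing a reward."""
        self.A += np.outer(x, x)
        self.b += reward * x


# Usage: select the child whose feature vector maximizes the LinUCB index,
# then feed the observed reward back into the regression statistics.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bandit = LinUCB(d=4)
    children = [rng.normal(size=4) for _ in range(3)]   # hypothetical feature vectors
    best = max(children, key=bandit.index)
    bandit.update(best, reward=rng.random())
```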
