Abstract
Over the last couple of years, we have seen a research shift from developing search techniques such as Monte-Carlo Tree Search towards the use of deep-learning models in games. The first contribution to this issue, Initial state diversification for efficient AlphaZero-style training, by Yosuke Demura and Tomoyuki Kaneko, reflects this shift in focus. The article deals with GUMBEL ALPHAZERO, a more efficient variant of ALPHAZERO that enables researchers to train agents with relatively few computational resources. The authors discuss how to further improve the playing strength of this engine under such a limited computational budget. Another research shift is from developing new AI engines that play games towards developing methods that explain why an engine made a certain move. In the second contribution, Chess and explainable AI, Yngvi Björnsson makes the case that chess should become the Drosophila of explainable AI research.