Abstract

In this paper, we present an approach that learns to play the game of Checkers given only its rules. We combine neural networks and reinforcement learning with Monte Carlo Tree Search and alpha-beta pruning. All human influence or domain knowledge is removed: the data needed to train the neural network is generated through self-play. After a fixed number of completed games, training is run and the stronger neural network is carried over to the next iteration. We compare the successive versions of the neural network and their progress at playing Checkers; every new version represented a stronger player.
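The abstract describes an iterative loop: generate self-play games with the current network, train a candidate on that data, and promote the candidate only if it beats the incumbent. The sketch below illustrates that control flow under stated assumptions; every name (Net, play_game, train, evaluate, training_pipeline) and every numeric choice is an illustrative placeholder, not the authors' code, and the Checkers engine, the MCTS with alpha-beta pruning, and the actual neural network are stubbed out.

```python
# Minimal sketch of the self-play / train / evaluate iteration described in the
# abstract. All identifiers are hypothetical placeholders; the real system would
# plug in a Checkers engine, MCTS with alpha-beta pruning, and a trained network.

import random
from dataclasses import dataclass


@dataclass
class Net:
    """Stand-in for the policy/value neural network (a single bias here)."""
    bias: float = 0.0

    def predict(self, state):
        # Placeholder: a real network would return move priors and a value.
        return random.random() + self.bias


def play_game(net_a, net_b):
    """Placeholder game: returns +1 if net_a 'wins', -1 otherwise."""
    # The real system would run MCTS (with alpha-beta pruning) guided by each
    # network to select moves on an actual Checkers board until the game ends.
    return 1 if net_a.predict(None) > net_b.predict(None) else -1


def generate_self_play_data(net, n_games):
    """Play the current network against itself and collect game outcomes."""
    return [play_game(net, net) for _ in range(n_games)]


def train(net, data):
    """Placeholder training step: returns a slightly perturbed candidate net."""
    return Net(bias=net.bias + 0.01 * sum(data) / max(len(data), 1))


def evaluate(candidate, incumbent, n_games=20, threshold=0.55):
    """Promote the candidate only if it wins more than `threshold` of games."""
    wins = sum(1 for _ in range(n_games) if play_game(candidate, incumbent) > 0)
    return wins / n_games > threshold


def training_pipeline(iterations=5, games_per_iteration=100):
    best = Net()
    for i in range(iterations):
        data = generate_self_play_data(best, games_per_iteration)
        candidate = train(best, data)
        if evaluate(candidate, best):
            # Transfer the stronger network to the next iteration, as in the paper.
            best = candidate
        print(f"iteration {i}: bias={best.bias:.4f}")
    return best


if __name__ == "__main__":
    training_pipeline()
```

The promotion-by-evaluation step mirrors the abstract's claim that each accepted version is a stronger player; the 55% win threshold is an assumed gating value, not one taken from the paper.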
