Abstract

Humans tend to learn complex abstract concepts faster if examples are presented in a structured manner. For instance, when learning how to play a board game, usually one of the first concepts learned is how the game ends, i.e. the actions that lead to a terminal state (win, lose or draw). The advantage of learning end-games first is that once the actions leading to a terminal state are understood, it becomes possible to incrementally learn the consequences of actions that are further away from a terminal state – we call this an end-game-first curriculum. The state-of-the-art machine learning player for general board games, AlphaZero by Google DeepMind, does not employ a structured training curriculum. Whilst DeepMind's approach is effective, their method for generating experiences by self-play is resource intensive, costing millions of dollars in computational resources. We have developed a new method called the end-game-first training curriculum, which, when applied to the self-play/experience-generation loop, reduces the computational resources required to achieve the same level of learning. Our approach improves performance by not generating experiences which are expected to be of low training value. The end-game-first curriculum enables significant savings in processing resources and is potentially applicable to other problems that can be framed in terms of a game.
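The core idea – concentrating training on experiences near a terminal state first, then widening the window as learning progresses – can be sketched as follows. This is a minimal illustrative sketch only: the function names, the horizon schedule, and the stand-in self-play routine are assumptions, not details from the paper.

```python
import random

def play_episode(policy=None, max_moves=50):
    """Stand-in for a self-play game: returns (state, outcome) pairs
    ordered from the opening move to the terminal state. A real
    implementation would run the policy through a game engine."""
    length = random.randint(5, max_moves)
    outcome = random.choice([-1, 0, 1])  # lose / draw / win
    return [(f"state_{i}", outcome) for i in range(length)]

def curriculum_filter(episode, horizon):
    """End-game-first selection: keep only experiences within `horizon`
    moves of the terminal state. Positions further from the end are
    treated as having low training value early on and are discarded."""
    return episode[-horizon:]

def train(num_iterations=5, start_horizon=2, growth=2):
    """Toy self-play loop: the training window starts at the end of the
    game and grows each iteration (hypothetical linear schedule)."""
    horizon = start_horizon
    replay_buffer = []
    for _ in range(num_iterations):
        episode = play_episode()
        replay_buffer.extend(curriculum_filter(episode, horizon))
        horizon += growth  # widen the window as end-game play is learned
    return replay_buffer
```

In this sketch the saving comes from `curriculum_filter`: early iterations only retain (and hence only need to evaluate) positions close to a terminal state, mirroring the paper's claim of avoiding experiences expected to be of low training value.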
