Abstract

The strength of a program for playing an adversary game such as chess or checkers is greatly influenced by how selectively it explores the various branches of the game tree. Typically, some lines are discontinued early while others are searched more deeply. Finding the best set of parameters to control these search extensions is a difficult, time-consuming, and tedious task. In this paper we describe a method for automatically tuning search-extension parameters in adversary search. Based on the new method, two learning variants are introduced: one for offline learning and the other for online learning. The two approaches are compared, and experimental results are provided in the domain of chess.
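The selectivity described above is commonly implemented as fractional-ply extensions inside an alpha-beta search. The sketch below is a rough illustration of that idea, not the paper's method: a few tunable extension weights decide how much extra depth "interesting" moves receive. The parameter names (check_ext, recapture_ext), the cap value, the crude material evaluation, and the use of the python-chess library are all assumptions made here for illustration.

```python
# Minimal sketch of alpha-beta search with tunable extension weights.
# Illustrative only; not the algorithm proposed in the paper.
import chess

# Extension weights in fractions of a ply. Values like these are the
# kind of parameters the paper proposes to tune automatically.
PARAMS = {"check_ext": 1.0, "recapture_ext": 0.5}

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Crude material balance from the side to move's perspective.
    A real engine would also score mates and positional features."""
    score = 0.0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def alphabeta(board: chess.Board, depth: float,
              alpha: float, beta: float) -> float:
    if depth <= 0 or board.is_game_over():
        return evaluate(board)
    for move in board.legal_moves:
        # Decide how much extra depth this move earns.
        ext = 0.0
        if board.gives_check(move):
            ext += PARAMS["check_ext"]
        if board.is_capture(move):   # stand-in for a recapture test
            ext += PARAMS["recapture_ext"]
        # Cap the extension below one ply so depth strictly decreases
        # and the search is guaranteed to terminate.
        ext = min(ext, 0.75)
        board.push(move)
        score = -alphabeta(board, depth - 1 + ext, -beta, -alpha)
        board.pop()
        if score > alpha:
            alpha = score
            if alpha >= beta:
                break  # beta cutoff: the opponent avoids this line
    return alpha

# Example: search the opening position to a nominal depth of 3 plies.
print(alphabeta(chess.Board(), 3, float("-inf"), float("inf")))
```

In this framing, tuning the engine amounts to choosing good values for PARAMS, which is precisely the task the paper's offline and online learning variants are meant to automate.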
