Abstract

Card games are regarded as an idealized model for many real-world problems because of their rich hidden information and strategic decision-making. They provide a fertile environment for artificial intelligence (AI), especially reinforcement learning algorithms. With the boom of deep neural networks, increasing breakthroughs have been made in this challenging domain. Koi-Koi is a traditional two-player imperfect-information playing card game. However, due to its unique deck and complex rules, related research has mostly relied on handcrafted features and custom network architectures. In this paper, we design a more general AI framework that relies on a Transformer encoder as the network backbone with tokenized card-state input, trained by Monte-Carlo reinforcement learning with a phased round reward. Experimental results show that our AI achieves a 53% winning rate and a +2.02 average point difference against experienced human players in multi-round Koi-Koi games. Moreover, with the aid of the attention mechanism, we provide a novel view for analyzing playing strategy. This framework design can be applied to various card games. The project is available at https://github.com/guansanghai/KoiKoi-AI.
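As a rough sketch of what a tokenized card-state input could look like, the following Python assumes the standard 48-card hanafuda deck (12 months of 4 cards each) used in Koi-Koi. The zone names, their IDs, and the (card token, zone ID) pairing scheme are illustrative assumptions, not the paper's exact encoding; in a full model, each pair would typically be embedded (card embedding plus zone embedding) before entering the Transformer encoder.

```python
from typing import Dict, List, Tuple

def card_token(month: int, slot: int) -> int:
    """Map a hanafuda card (month 1-12, slot 0-3 within the month)
    to a unique token id in 0..47."""
    assert 1 <= month <= 12 and 0 <= slot <= 3
    return (month - 1) * 4 + slot

# Illustrative zones a card can occupy from the acting player's view.
# "unseen" merges the opponent's hand and the draw pile, which are
# indistinguishable under imperfect information.
ZONES: Dict[str, int] = {
    "my_hand": 0,
    "field": 1,
    "my_pile": 2,
    "opp_pile": 3,
    "unseen": 4,
}

def tokenize_state(
    state: Dict[str, List[Tuple[int, int]]]
) -> List[Tuple[int, int]]:
    """Flatten a game state into a sequence of (card_token, zone_id)
    pairs, one per card, suitable as Transformer encoder input."""
    seq = []
    for zone, cards in state.items():
        for month, slot in cards:
            seq.append((card_token(month, slot), ZONES[zone]))
    # A fixed ordering keeps the input deterministic; self-attention
    # itself is permutation-invariant over the card tokens.
    return sorted(seq)
```

Because every card is a discrete token tagged with its zone, the same encoding pattern transfers to other card games by swapping the deck size and zone vocabulary.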
