Abstract

Imagine agents repeatedly playing a bimatrix game against opponents drawn from a population of assorted skill levels. This paper studies how agents strategize in such a metagame and the population distributions that result. Specifically, we investigate how an agent should adjust its strategy as it also learns to play the game, that is, as the agent improves its skills (from novice to expert) with repeated exposure to the game. To perform this task, we introduce a dynamic game-theoretic model of learning in metagames. We use it to explain the learning dynamics and character selection exhibited in data from the game Super Smash Bros. Melee. Indeed, the primary motivation behind this work is the application of game-theoretic methods in video game balancing.
