Abstract

The main goal of this paper is to present a novel adaptive agent-based algorithm, based on gradual learning through repeated interaction with the environment, for calculating players' mixed Nash equilibrium strategies in static normal-form games. The proposed algorithm not only computes equilibrium states but also models the dynamic behavior of the participants. Although several algorithms have been proposed over the years to solve this problem, many of them rely on unrealistic assumptions and neglect important principles. The proposed algorithm covers many aspects of real game environments. A distinguishing characteristic of the proposed method is its high performance in games of incomplete information, where players use only limited information about each other, as in real games. The algorithm is tested on three classes of games with both pure and mixed Nash equilibria. The simulation results are robust across the test games and demonstrate the efficiency of the algorithm.

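The abstract does not specify the paper's update rule, so as a point of reference the sketch below shows one classic learning dynamic of this agent-based flavor: fictitious play, in which each agent repeatedly best-responds to the empirical frequency of its opponent's past actions. The game (matching pennies), the iteration count, and all names in the code are illustrative assumptions, not the authors' method; the sketch only demonstrates how repeated interaction can drive empirical play toward a mixed Nash equilibrium.

```python
# A minimal sketch of one learning dynamic (fictitious play) that reaches a
# mixed Nash equilibrium through repeated interaction.  The game, update
# rule, and names here are illustrative assumptions, not the paper's method.
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum):
# the column player's payoff is the negation.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

# Pseudo-counts of each player's past actions (start uniform).
counts = [np.ones(2), np.ones(2)]

for t in range(50_000):
    # Each agent forms a belief from the opponent's empirical action frequencies.
    belief_about_col = counts[1] / counts[1].sum()
    belief_about_row = counts[0] / counts[0].sum()
    # Each agent best-responds to its belief.
    a_row = int(np.argmax(A @ belief_about_col))     # row maximizes its payoff
    a_col = int(np.argmax(-(belief_about_row @ A)))  # column minimizes row's payoff
    counts[0][a_row] += 1
    counts[1][a_col] += 1

# Empirical frequencies approach the mixed equilibrium (0.5, 0.5) for both players.
print("empirical row strategy:", counts[0] / counts[0].sum())
print("empirical col strategy:", counts[1] / counts[1].sum())
```

Fictitious play is a convenient baseline here because its empirical frequencies provably converge to equilibrium in zero-sum games such as matching pennies; the paper's algorithm, by contrast, targets the harder setting of incomplete information described in the abstract.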