Abstract

Game theory offers powerful tools for reasoning about agent behavior and incentives in multi-agent systems. Most of these tools require a game model that specifies the outcome for every possible combination of agent behaviors in the subject environment. The requirement to describe all possible outcomes often severely limits the fidelity at which we can model agent choices, or the feasible scale of the agent population. Game theorists must therefore select the scale and detail of the system to model with extreme care, balancing fidelity against tractability. This tension comes to the fore in simulation-based approaches to game modeling [3, 4], where filling in a single cell of a game matrix may require running many large-scale agent-based simulations. It is often feasible to simulate large numbers of interacting agents, but infeasible to sample all (P+S−1 choose P) combinations of strategies in a symmetric game with P players and S strategies. If the payoff matrix must be filled completely before analysis can proceed, this combinatorial growth severely restricts the size of simulation-based games. Our alternative approach accommodates incomplete specification of outcomes, extending the game model to a larger domain through an inductive learning process. We take as input outcome data from selected combinations of agent strategies in symmetric games, and learn a game model over the full joint strategy space. This allows us to scale game modeling to large numbers of agents without unduly restricting the size of the strategy sets considered. Our primary aims are to identify symmetric mixed-strategy ε-Nash equilibria and to calculate social welfare for symmetric mixed strategies. We measure the quality of approximate equilibria by regret ε(·), the maximum gain any player can achieve by switching to a pure strategy, and the accuracy of social welfare estimates by absolute error.
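The two quantities the abstract leans on can be made concrete in a few lines. The sketch below, which is illustrative rather than taken from the paper, computes the (P+S−1 choose P) profile count for a symmetric game and evaluates regret for a mixed strategy in a small 2-player symmetric game; rock-paper-scissors is used here purely as a familiar stand-in example.

```python
from math import comb

def num_profiles(P, S):
    # Number of distinct pure-strategy profiles in a symmetric game
    # with P players and S strategies: "P+S-1 choose P".
    return comb(P + S - 1, P)

# Even with only S = 5 strategies, the count explodes as players grow:
for P in (2, 10, 50, 100):
    print(P, num_profiles(P, 5))  # 15, 1001, 316251, 4598126

def regret(A, sigma):
    # Regret of a symmetric mixed strategy sigma in a 2-player symmetric
    # game with row-player payoff matrix A: the largest gain available by
    # deviating to some pure strategy while the opponent plays sigma.
    payoff_vs_sigma = [sum(A[i][j] * sigma[j] for j in range(len(sigma)))
                       for i in range(len(A))]
    mix_payoff = sum(sigma[i] * payoff_vs_sigma[i] for i in range(len(sigma)))
    return max(payoff_vs_sigma) - mix_payoff

# Rock-paper-scissors (illustrative example, not from the paper):
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
print(regret(rps, [1/3, 1/3, 1/3]))  # uniform mixture is exact: regret 0
print(regret(rps, [1.0, 0.0, 0.0]))  # always-rock: regret 1 (deviate to paper)
```

A profile with zero regret is an exact Nash equilibrium; a profile with regret at most ε is an ε-Nash equilibrium in the sense used above.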

