Abstract

In game theory, the Price of Anarchy (PoA) and the Price of Stability (PoS) are standard tools for quantifying the efficiency loss caused by competition. This paper investigates the PoA and PoS under incomplete information in Bayesian-Markov games. Computationally effective optimization tools are essential for studying these equilibria, so we formulate the computation of the PoA and PoS as a tractable nonlinear programming problem. We consider regularized cost-minimization problems and present a framework for constructing projection-gradient algorithms with near-optimal a priori performance guarantees. For the constrained Lagrangian approach, we employ Tikhonov regularization, under which the regularized solutions converge to a unique solution of the original problem. We then propose a framework for computing the difference between the ‘best equilibrium’ and the ‘optimal solution’, as well as the ratio between the ‘worst equilibrium’ and the ‘optimal solution’. To solve these problems, we adopt a game-theoretic approach, assigning the agents cost functions so as to maximize (PoA) or minimize (PoS) the equilibrium efficiency. Once the agents’ regularized cost functions have been established, any algorithm capable of finding a Nash equilibrium of the system inherits a performance guarantee matching the PoA and PoS. The convergence of the proposed projection-gradient method is established under mild conditions. Finally, we discuss the implications of our results when this paradigm is applied to convex systems.
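
For reference, in their standard ratio form for a cost-minimization game with social cost C, set of equilibria Eq, and feasible strategy profiles S, the two measures are conventionally defined as follows (a textbook formulation, not a definition quoted from the paper):

```latex
% Standard ratio definitions of the Price of Anarchy (PoA) and the
% Price of Stability (PoS) for a cost-minimization game:
% C is the social cost, Eq the set of equilibria, S the feasible profiles.
\[
  \mathrm{PoA} \;=\; \frac{\max_{s \in \mathrm{Eq}} C(s)}{\min_{s \in S} C(s)},
  \qquad
  \mathrm{PoS} \;=\; \frac{\min_{s \in \mathrm{Eq}} C(s)}{\min_{s \in S} C(s)}.
\]
```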
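
The abstract also refers to a projection-gradient scheme applied to a Tikhonov-regularized cost. The sketch below is only a minimal illustration of that generic idea, not the paper's algorithm: the cost f, the regularization weight delta, the step size, and the box constraint are all illustrative assumptions.

```python
# Minimal sketch of a projection-gradient method on a Tikhonov-regularized
# objective  f(x) + (delta/2) * ||x||^2,  projected onto the box [0, 1]^n.
# The cost f, the weight delta, the step size, and the box constraint are
# illustrative assumptions, not the paper's actual model.
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projection_gradient(grad_f, x0, delta=0.1, step=0.05, iters=500, tol=1e-8):
    """Projected gradient descent on the regularized cost f(x) + (delta/2)||x||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x) + delta * x          # gradient of the regularized cost
        x_new = project_box(x - step * g)  # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Toy convex cost: f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    b = rng.standard_normal(5)
    grad_f = lambda x: A.T @ (A @ x - b)
    x_star = projection_gradient(grad_f, x0=np.full(3, 0.5))
    print("regularized minimizer on the box:", x_star)
```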
