Abstract

Various types of dynamics have been studied for repeated game play, and one class that has received much attention recently consists of dynamics based on no-regret algorithms from the area of machine learning. It is known that dynamics based on generic no-regret algorithms may not converge to Nash equilibria in general, but only to a larger set of outcomes, namely coarse correlated equilibria. Moreover, convergence results based on generic no-regret algorithms typically use a weaker notion of convergence: the convergence of the average plays rather than the actual plays. Prior work has shown that when a specific no-regret algorithm, the well-known multiplicative updates algorithm, is used, convergence of the actual plays to equilibria can be established and better-quality outcomes can be reached in atomic congestion games and load balancing games. Are there more natural no-regret dynamics that perform well, in terms of both convergence and quality of outcomes, in suitable classes of games? We answer this question positively by showing that when each player individually employs the mirror-descent algorithm, a well-known generic no-regret algorithm, the actual plays converge quickly to equilibria in nonatomic congestion games. This gives rise to a family of algorithms that includes the multiplicative updates algorithm and the gradient descent algorithm, as well as many others. Furthermore, we show that our dynamics achieve good bounds on the quality of outcomes as measured by two different social costs: the average individual cost and the maximum individual cost.
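To illustrate the family of dynamics referred to above, here is a minimal sketch, not the paper's actual construction: mirror descent with the entropy regularizer, which specializes to the multiplicative updates rule, run on an assumed toy nonatomic routing instance over three parallel links. The latency parameters `a`, `b`, the step size `eta`, and the number of iterations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Assumed toy instance: a unit mass of nonatomic players routes over three
# parallel links with linear latencies l_e(x_e) = a_e * x_e + b_e.
# The aggregate flow vector x lives on the probability simplex.
a = np.array([1.0, 2.0, 4.0])   # assumed latency slopes (illustrative)
b = np.array([0.5, 0.2, 0.0])   # assumed latency offsets (illustrative)

def latencies(x):
    """Per-link latency experienced under flow vector x."""
    return a * x + b

def mirror_descent_entropy_step(x, eta):
    """One mirror-descent step with the entropy regularizer,
    i.e. the multiplicative updates rule on the simplex."""
    w = x * np.exp(-eta * latencies(x))
    return w / w.sum()

x = np.ones(3) / 3   # start from the uniform flow
eta = 0.2            # assumed step size
for _ in range(200):
    x = mirror_descent_entropy_step(x, eta)

print("flow:", np.round(x, 3))
# At a Wardrop equilibrium of this toy instance, the latencies of the used
# links should be roughly equalized.
print("latencies:", np.round(latencies(x), 3))
```

Swapping the entropy regularizer for the squared Euclidean norm would turn the same loop into projected gradient descent, another member of the family of dynamics mentioned above.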
