Abstract

We present a distributed Nash equilibrium seeking method based on the Bregman forward-backward splitting, which allows a mirror map, rather than the standard Euclidean projection, to serve as the backward operator. Our main technical contribution is to show convergence to a Nash equilibrium when the game has a cocoercive pseudogradient mapping. Furthermore, when the feasible sets of the agents are simplices, a suitable choice of Legendre function yields an exponentiated pseudogradient method, which, in our numerical experience, outperforms the standard projected pseudogradient and dual averaging methods.
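To illustrate the simplex case, the sketch below shows a generic entropic-mirror (exponentiated-gradient) step of the kind the abstract alludes to: with the negative entropy as the Legendre function, the Bregman forward-backward update reduces to a multiplicative update plus normalization, so no explicit projection is needed. This is a minimal, standalone sketch of one agent's local update; the function name, step size, and gradient values are hypothetical, and the paper's distributed algorithm may include additional consensus or communication terms not shown here.

```python
import numpy as np

def exponentiated_pseudogradient_step(x, grad, step_size):
    """One entropic-mirror (exponentiated-gradient) update on the simplex.

    With negative entropy as the Legendre function, the Bregman
    forward-backward step becomes a multiplicative update followed by
    normalization, so the iterate stays in the simplex without an
    explicit Euclidean projection. Hypothetical sketch, not the paper's
    full distributed scheme.
    """
    y = x * np.exp(-step_size * grad)   # forward step, written in mirror form
    return y / y.sum()                  # backward step: renormalize onto the simplex

# Toy usage with placeholder values (not from the paper's experiments).
x = np.ones(3) / 3                      # start at the simplex barycenter
grad = np.array([0.2, -0.1, 0.4])       # placeholder local pseudogradient component
x_next = exponentiated_pseudogradient_step(x, grad, step_size=0.5)
print(x_next, x_next.sum())             # remains a probability vector
```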
