Abstract

In this work, we investigate a class of stochastic aggregative games in which each player has an expectation-valued objective function that depends on its local strategy and on the aggregate of all players' strategies. We propose a distributed algorithm with operator extrapolation to seek the Nash equilibrium: each player maintains an estimate of the aggregate by exchanging this information with its neighbors over a time-varying network, and updates its equilibrium estimate via the mirror descent method. Specifically, we embed an operator extrapolation that uses the two most recent stochastic gradient samples in the search direction to accelerate convergence. Assuming that the pseudo-gradient mapping is generalized strongly monotone with respect to the Bregman distance, we prove that the proposed algorithm achieves the optimal convergence rate O(1/k) in terms of the expected Bregman distance to the Nash equilibrium. Finally, the performance of the algorithm is demonstrated via numerical simulations.
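To make the ingredients of the abstract concrete, the following is a minimal numerical sketch of the kind of scheme it describes, written from the description alone rather than from the paper's algorithm. Everything problem-specific is an assumption: a quadratic aggregative cost f_i(x_i, sigma) = 0.5*a_i*x_i^2 + sigma*x_i - b_i*x_i with sigma the mean strategy, box constraints, a Euclidean distance-generating function (so the mirror step reduces to a projection), additive Gaussian gradient noise, a time-varying ring network with doubly stochastic mixing, and the step-size and extrapolation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 2000                      # players, iterations
a = rng.uniform(1.0, 2.0, N)        # per-player curvature (hypothetical data)
b = rng.uniform(0.0, 1.0, N)        # per-player offset (hypothetical data)
lo, hi = 0.0, 1.0                   # box constraint on every strategy

def weights(k):
    """Doubly stochastic mixing matrix of a time-varying ring (illustrative)."""
    W = np.eye(N) * 0.5
    shift = 1 if k % 2 == 0 else -1
    for i in range(N):
        W[i, (i + shift) % N] = 0.5
    return W

def noisy_grad(x, v):
    """Stochastic partial gradient of f_i at (x_i, aggregate estimate v_i)."""
    return a * x + v - b + 0.1 * rng.standard_normal(N)

x = np.full(N, 0.5)                 # local strategies
v = x.copy()                        # local estimates of the aggregate
g_prev = noisy_grad(x, v)           # previous gradient sample, kept for extrapolation
eta, lam = 0.05, 1.0                # step size and extrapolation weight (assumed)

for k in range(T):
    g = noisy_grad(x, v)
    # Operator extrapolation: the search direction combines the current and the
    # previous stochastic gradient samples.
    d = g + lam * (g - g_prev)
    # Euclidean mirror (projected gradient) step onto the box constraint.
    x_new = np.clip(x - eta * d, lo, hi)
    # Dynamic average consensus: refresh the aggregate estimate by mixing with
    # neighbors over the time-varying graph and adding the local strategy change.
    v = weights(k) @ v + (x_new - x)
    x, g_prev = x_new, g

print("strategies:", np.round(x, 3), " mean aggregate estimate:", np.round(v.mean(), 3))
```

With a doubly stochastic mixing matrix, the mean of the aggregate estimates stays equal to the true average strategy, which is what the neighbor exchange in the abstract is meant to track; the extrapolated direction g + lam*(g - g_prev) is the "two historical gradient samples" device in its simplest form.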
