Abstract

The distributed computation of Nash equilibria is of growing relevance in engineering, where such problems emerge in the context of distributed control. Accordingly, we present schemes for computing equilibria of two classes of static stochastic convex games complicated by a parametric misspecification, a natural concern in the control of large-scale networked engineered systems. In both schemes, players learn the equilibrium strategy while resolving the misspecification: 1) Monotone stochastic Nash games: We present a set of coupled stochastic approximation schemes distributed across agents, in which the first scheme updates each agent's strategy via a projected (stochastic) gradient step, while the second updates each agent's belief regarding its misspecified parameter using an independently specified learning problem. We show that the resulting sequences converge almost surely to the true equilibrium strategy and the true parameter, respectively. Notably, the equilibrium strategy achieves the optimal rate of convergence in a mean-squared sense, with a quantifiable degradation in the rate constant; 2) Stochastic Nash-Cournot games with unobservable aggregate output: We refine 1) to a Cournot setting in which the tuple of strategies is unobservable, while payoff functions and strategy sets are public knowledge (a common-knowledge assumption). By utilizing observations of noise-corrupted prices, we develop iterative fixed-point schemes that simultaneously learn the equilibrium strategies and the misspecified parameter in an almost sure sense.
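
To illustrate the coupled updates described above, the following is a minimal Python sketch under assumed, simplified conditions: a two-firm stochastic Nash-Cournot game with linear inverse demand whose intercept is unknown to the firms. The model, the names (theta_star, b, c, cap), and the step-size choices are illustrative assumptions, not taken from the paper; the sketch only mirrors the structure of the scheme, namely a projected stochastic gradient step on each firm's strategy evaluated at its current belief, coupled with an independent stochastic approximation step that learns the misspecified parameter.

# Hypothetical sketch (model and notation are illustrative, not from the paper):
# two-firm stochastic Nash-Cournot game with a misspecified demand intercept.
# Each firm i runs two coupled stochastic-approximation updates per iteration:
#   (i)  a projected stochastic gradient step on its output x[i], evaluated at
#        its current belief theta[i] of the unknown intercept, and
#   (ii) an independent learning step driving theta[i] toward theta_star.
import numpy as np

rng = np.random.default_rng(0)

theta_star = 10.0          # true (unknown) demand intercept
b = 1.0                    # demand slope: price = theta - b * sum(x)
c = np.array([1.0, 2.0])   # marginal production costs
cap = 10.0                 # production capacity (strategy set [0, cap])

x = np.array([5.0, 5.0])       # initial strategies
theta = np.array([2.0, 20.0])  # initial (misspecified) beliefs

for k in range(1, 50_001):
    gamma = 1.0 / k            # diminishing step sizes
    alpha = 1.0 / k

    X = x.sum()
    for i in range(2):
        # Stochastic gradient of firm i's (negative) profit at its own belief:
        #   -d/dx_i [ (theta_i - b*X) * x_i - c_i * x_i ], plus zero-mean noise.
        grad = -(theta[i] - b * X) + b * x[i] + c[i] + rng.normal(0.0, 0.1)
        x[i] = np.clip(x[i] - gamma * grad, 0.0, cap)   # projected SA step

        # Independent learning step: stochastic approximation on a least-squares
        # problem whose unique solution is theta_star, using a noisy sample.
        sample = theta_star + rng.normal(0.0, 0.5)
        theta[i] = theta[i] - alpha * (theta[i] - sample)

print("learned intercepts:", theta)   # approaches theta_star
print("equilibrium outputs:", x)      # approaches the true Cournot equilibrium

In this toy setting the strategy update only ever uses the firm's own current belief, so the two iterates are coupled exactly in the sense sketched in the abstract: the learning sequence converges to the true parameter, and the strategy sequence, driven by that converging belief, settles at the equilibrium of the correctly specified game.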
