Abstract

This paper is concerned with distributed learning and optimization in large-scale settings. The well-known fictitious play (FP) algorithm has been shown to achieve Nash equilibrium learning in certain classes of multiagent games. However, FP can be computationally difficult to implement when the number of players is large. Sampled FP (SFP) is a variant of FP that mitigates the computational difficulties arising in FP by using a Monte Carlo (i.e., sampling-based) approach. Despite its computational advantages, a shortcoming of SFP is that the number of samples that must be drawn at each iteration grows without bound as the algorithm progresses. In this paper, we propose single sample FP (SSFP), a variant of SFP in which only one sample needs to be drawn in each round of the algorithm. Convergence of SSFP to the set of Nash equilibria is proven. Simulation results show that the performance of SSFP is comparable to that of SFP, despite drawing far fewer samples.
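To illustrate the idea described above, the following is a minimal sketch of a single-sample fictitious-play update on a two-player matrix game: each round, every player draws one action from the empirical distribution of the opponent's past play, folds it into a running payoff estimate, and best-responds to that estimate. The function name, the running-average update, and the two-player setting are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def single_sample_fp(A, B, T=5000, rng=None):
    """Illustrative sketch (not the paper's exact rule) of single-sample
    fictitious play on a two-player matrix game with payoff matrices
    A (row player) and B (column player)."""
    rng = np.random.default_rng(rng)
    n, m = A.shape
    # Empirical counts of each player's past actions (initialized uniformly).
    counts1, counts2 = np.ones(n), np.ones(m)
    # Running estimates of each player's payoff for each of its own actions.
    est1, est2 = np.zeros(n), np.zeros(m)
    for t in range(1, T + 1):
        # Each player draws a SINGLE sample of the opponent's action
        # from the empirical distribution of past play.
        a2 = rng.choice(m, p=counts2 / counts2.sum())
        a1 = rng.choice(n, p=counts1 / counts1.sum())
        # Running-average update of the estimated payoffs using that one sample.
        est1 += (A[:, a2] - est1) / t
        est2 += (B[a1, :] - est2) / t
        # Best-respond to the current estimates and record the plays.
        counts1[np.argmax(est1)] += 1
        counts2[np.argmax(est2)] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Example: matching pennies (zero-sum), where the empirical action
# frequencies are expected to approach the mixed equilibrium (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = single_sample_fp(A, -A, T=20000, rng=0)
print(x, y)
```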
