Abstract
In this paper we study the consensus problem previously introduced in (Nourian et al., 2009) using the Stochastic Mean Field (MF) (Nash Certainty Equivalence (NCE)) control framework. We explicitly compute the unique solution of the MF (NCE) equation system corresponding to the large population dynamic game model. We also show that the set of MF (NCE) control laws possesses an εN-Nash equilibrium property, where εN → 0 as the population size N goes to infinity. These control strategies drive each agent to track the mean of the overall population's initial state distribution, which is reached asymptotically, thus achieving mean consensus. In the MF (NCE) set-up each agent has a priori information on the initial state distribution of the overall population; relaxing this a priori information gives rise to localized feedback MF (NCE) control laws in which each agent observes a time-varying random subset of the overall population. These analyses begin to bridge (i) the stochastic MF (NCE) control methodology, which uses a priori information and only individual state feedback, and (ii) standard consensus algorithms, which involve real-time observations of other agents' states. Finally, we present the centralized optimal control model of the problem and compare it with the (decentralized) MF (NCE) dynamic game model.
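The central idea of the abstract — that decentralized control laws using only a priori knowledge of the population's initial state distribution (and no observation of other agents) can drive all agents to the population mean — can be illustrated with a toy numerical sketch. This is not the paper's model: the dynamics, gain, and distribution parameters below are hypothetical simplifications chosen purely for illustration, with noise omitted.

```python
import random

random.seed(0)

N = 1000
# Each agent's initial state is drawn from a distribution whose mean
# is assumed to be known a priori by every agent (the MF (NCE) set-up).
x = [random.gauss(5.0, 2.0) for _ in range(N)]
target = 5.0  # a priori known mean of the initial state distribution

dt, steps, gain = 0.01, 2000, 1.0
for _ in range(steps):
    # Decentralized law: each agent steers toward the a priori mean,
    # observing no other agent's state in real time.
    x = [xi + gain * (target - xi) * dt for xi in x]

# After many steps the states cluster around the target: mean consensus.
spread = max(x) - min(x)
mean_state = sum(x) / N
```

In this sketch the spread across agents contracts geometrically toward zero while the ensemble mean stays at the a priori target, which is the asymptotic mean-consensus behavior the abstract describes; the paper's actual result is obtained via the MF (NCE) equation system rather than this ad hoc gradient law.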