Abstract

This article examines the well-known distributed algorithm "try again till you're satisfied" in an opinion formation game. It shows that a simple learning rule, in which each agent reacts only when unsatisfied and observes only an on/off satisfaction signal, can reach a satisfactory solution. Learning takes place through the interactions of the game: the agents have no direct knowledge of the payoff model, each agent observes only its own satisfaction/dissatisfaction state, and each has only one-step memory. Existing results linking the outcomes of such schemes to a stationary satisfactory set do not apply here because the action space is continuous. We give a direct proof of convergence of the scheme for arbitrary initial conditions and an arbitrary number of agents. As the number of iterations grows, we show that a consensus emerges in the opinion distribution of the satisfied agents. A similar result holds for the mean-field opinion formation game.
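The abstract does not spell out the update rule or the payoff model, but the qualitative description (react only when unsatisfied, binary satisfaction observation, one-step memory, continuous action space) can be sketched. Below is a minimal illustrative Python sketch, not the paper's algorithm: the satisfaction rule (opinion within a tolerance `tol` of the population mean) and the uniform resampling of unsatisfied agents are assumptions introduced here for concreteness.

```python
import random

def try_again_until_satisfied(n_agents=20, tol=0.15, max_iters=50000, seed=0):
    """Sketch of a satisfaction-based ("try again till you're satisfied")
    learning dynamic. Each agent sees only its own binary satisfied /
    unsatisfied signal: a satisfied agent keeps its current opinion
    (one-step memory), an unsatisfied agent simply tries a fresh opinion.
    The satisfaction rule used here -- being within `tol` of the current
    population mean -- is a hypothetical stand-in for the paper's payoff
    model, chosen so that consensus-like clustering can emerge."""
    rng = random.Random(seed)
    # Continuous action space: opinions live in the interval [0, 1].
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(max_iters):
        mean = sum(opinions) / n_agents
        unsatisfied = [i for i in range(n_agents)
                       if abs(opinions[i] - mean) >= tol]
        if not unsatisfied:
            break  # every agent satisfied: a satisfactory profile reached
        for i in unsatisfied:
            # No gradient, no payoff knowledge: just draw a new opinion.
            opinions[i] = rng.random()
    return opinions

opinions = try_again_until_satisfied()
spread = max(opinions) - min(opinions)
```

Because the dynamic is randomized, convergence within `max_iters` is probabilistic rather than guaranteed for a given seed; when the loop does terminate early, all opinions lie within a band of width `2 * tol` around their mean, which is the kind of consensus among satisfied agents the abstract describes.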
