Abstract

We analyze boundedly rational updating in a repeated-interaction network model with binary states and actions. We decompose the updating procedure into a deterministic, stationary Markov belief-updating component inspired by DeGroot updating, paired with a random probability-matching strategy that assigns probabilities to the actions given the underlying boundedly rational belief. This approach overcomes the impediments to consensus and naive learning that are inherent in deterministic updating functions in coarse-action environments. We show that if a sequence of growing networks satisfies vanishing influence, then the eventual consensus action equals the realized state with probability converging to one.
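The two components described in the abstract can be illustrated with a minimal sketch. The weight matrix, initial beliefs, and horizon below are hypothetical choices, not taken from the paper: beliefs evolve deterministically by repeated averaging over neighbors (DeGroot updating), and each agent then plays the binary action 1 with probability equal to its current belief (probability matching), rather than best-responding to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row-stochastic weight matrix W (hypothetical example): agent i's next
# belief is a weighted average of its neighbors' current beliefs.
W = np.array([
    [0.5, 0.25, 0.25],
    [0.3, 0.4,  0.3 ],
    [0.2, 0.3,  0.5 ],
])

# Initial beliefs that the binary state equals 1 (hypothetical values).
beliefs = np.array([0.9, 0.2, 0.6])

# Deterministic, stationary Markov belief updating: b_{t+1} = W b_t.
for _ in range(50):
    beliefs = W @ beliefs

# Probability matching: each agent plays action 1 with probability equal
# to its belief, so actions stay random even after beliefs reach consensus.
actions = (rng.random(beliefs.size) < beliefs).astype(int)
```

Because W is row-stochastic and strongly connected, the belief vector converges to a consensus value (a weighted average of the initial beliefs); the randomness needed to escape the coarse binary action space comes entirely from the probability-matching step.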
