Abstract

Recently, Jadbabaie et al. presented a social learning model in which agents update their beliefs by combining Bayesian posterior beliefs based on personal observations with weighted averages of the beliefs of their neighbors. For a network with a fixed topology, they provided sufficient conditions under which all agents in the network learn the true state almost surely. In this paper, we extend the model to networks with time-varying topologies. Under certain assumptions on the weights and connectivity, we prove that agents eventually form correct forecasts for upcoming signals and that the beliefs of all agents reach consensus. In addition, if no state is observationally equivalent to the true state from the point of view of all agents, we show that the consensus belief eventually concentrates on the true state.
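The update rule described above can be illustrated with a minimal numerical sketch. The setup below (two states, binary signals, three agents, randomly drawn row-stochastic weight matrices standing in for the time-varying topology, and the specific likelihoods) is an illustrative assumption, not the paper's construction: each agent forms a Bayesian posterior from its private signal and then mixes it with its neighbors' previous beliefs using that period's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

true_state = 1        # states are {0, 1}; state 1 is "true" (illustrative)
n_agents = 3

# likelihood[i, theta, s] = P_i(signal = s | state = theta); agent 2 is
# uninformative on its own, so it can only learn through the network.
likelihood = np.array([
    [[0.7, 0.3], [0.3, 0.7]],
    [[0.6, 0.4], [0.4, 0.6]],
    [[0.5, 0.5], [0.5, 0.5]],
])

beliefs = np.full((n_agents, 2), 0.5)   # uniform priors over the two states

def random_stochastic_matrix(n):
    """Row-stochastic weights with positive self-reliance (a fresh draw
    each period stands in for the time-varying topology)."""
    A = rng.random((n, n)) + np.eye(n)
    return A / A.sum(axis=1, keepdims=True)

for t in range(500):
    A = random_stochastic_matrix(n_agents)
    signals = [rng.choice(2, p=likelihood[i, true_state]) for i in range(n_agents)]
    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        # Bayesian posterior from the private signal alone
        post = beliefs[i] * likelihood[i, :, signals[i]]
        post /= post.sum()
        # Combine the own Bayesian update with neighbors' previous beliefs
        new_beliefs[i] = A[i, i] * post + sum(
            A[i, j] * beliefs[j] for j in range(n_agents) if j != i
        )
    beliefs = new_beliefs

print(np.round(beliefs, 3))
```

In this toy run all three agents, including the uninformative one, place most of their belief mass on the true state, consistent with the consensus-and-learning behavior the paper establishes under its connectivity and weight assumptions.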
