Abstract

We analyze boundedly rational learning in social networks within binary action environments. We establish how learning outcomes depend on the environment (i.e., informational structure, utility function), the axioms imposed on the updating behavior, and the network structure. In particular, we provide a normative foundation for quasi‐Bayesian updating, where a quasi‐Bayesian agent treats others' actions as if they were based only on their private signal. Quasi‐Bayesian updating induces learning (i.e., convergence to the optimal action for every agent in every connected network) only in highly asymmetric environments. In all other environments, learning fails in networks with a diameter larger than 4. Finally, we consider a richer class of updating behavior that allows for nonstationarity and differential treatment of neighbors' actions depending on their position in the network. We show that within this class there exist updating systems that induce learning for most networks.
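
To make the quasi-Bayesian rule concrete, the following is a minimal simulation sketch, not the paper's exact model: it assumes a binary state, conditionally i.i.d. private signals of precision q, and one possible stationary specification in which each agent, every period, combines its own signal with its neighbors' latest actions while treating each observed action as if it were that neighbor's raw private signal. The parameter names (q, T, adjacency) and the specific recursion are illustrative assumptions.

```python
import numpy as np

def llr(obs, q):
    """Log-likelihood ratio (state 1 vs 0) implied by a binary observation,
    treated as an independent signal of precision q -- the quasi-Bayesian
    step when the observation is actually a neighbor's action."""
    return np.log(q / (1 - q)) if obs == 1 else np.log((1 - q) / q)

def simulate_quasi_bayesian(adjacency, q=0.7, T=20, theta=1, seed=0):
    """Illustrative quasi-Bayesian dynamics on an undirected network.

    Each agent draws one private binary signal matching the state with
    probability q, then repeatedly observes neighbors' latest actions and
    updates its log-odds as if each observed action were that neighbor's
    private signal, ignoring that the action already aggregates information
    from elsewhere in the network.
    """
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    if theta == 1:
        signals = (rng.random(n) < q).astype(int)
    else:
        signals = (rng.random(n) >= q).astype(int)
    log_odds = np.array([llr(s, q) for s in signals])   # belief from own signal
    actions = (log_odds > 0).astype(int)                # initial myopic action
    for _ in range(T):
        new_log_odds = np.array([llr(s, q) for s in signals])
        for i in range(n):
            for j in np.flatnonzero(adjacency[i]):
                new_log_odds[i] += llr(actions[j], q)   # treat each action as a fresh signal
        actions = (new_log_odds > 0).astype(int)
    return actions

# Example: a line network of 6 agents (diameter 5)
A = np.zeros((6, 6), dtype=int)
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1
print(simulate_quasi_bayesian(A))
```

The line-network example is chosen only to echo the abstract's diameter condition; whether learning actually fails in such a network depends on the environment, which this sketch does not attempt to reproduce.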
