Abstract

I study social learning in networks with information acquisition and choice. Bayesian agents act in sequence, observe the choices of their connections, and acquire information via sequential search. Complete learning occurs if search costs are not bounded away from zero and the network is sufficiently connected and has identifiable information paths. If search costs are bounded away from zero, complete learning is possible in many stochastic networks, including almost-complete networks, but even a weaker notion of long-run learning fails in many other networks. When agents observe random numbers of immediate predecessors, the rate of convergence, the probability of wrong herds, and long-run efficiency properties are the same as in the complete network. The density of indirect connections affects convergence rates. Network transparency has short-run implications for welfare and efficiency. Simply letting agents observe the shares of earlier choices reduces inefficiency and welfare losses.
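As an illustration of the sequential setting described above, the sketch below simulates a stylized version of the model: agents move in order, observe the binary choices of a few randomly drawn predecessors, and may pay a cost to acquire one private signal before choosing. All specifics here (binary state and actions, a symmetric signal of precision q, a single costly signal in place of full sequential search, uniformly random observed predecessors, and the heuristic acquisition rule) are simplifying assumptions for exposition, not the paper's exact model.

```python
import random

def simulate(n_agents=2000, q=0.75, search_cost=0.02, k_obs=5, seed=0):
    """Stylized sequential social-learning simulation (illustrative only)."""
    rng = random.Random(seed)
    theta = rng.choice([0, 1])          # unknown binary state of the world
    actions = []                        # realized choices, in order of play

    for _ in range(n_agents):
        # Each agent observes the choices of up to k_obs random predecessors.
        obs = rng.sample(actions, min(k_obs, len(actions)))
        # Crude social score: each observed choice counts as weak evidence.
        social = sum(1 if a == 1 else -1 for a in obs)

        # Heuristic stand-in for optimal search: acquire a private signal only
        # when observed choices are roughly balanced and the signal's rough
        # informational value (2q - 1) exceeds the search cost.
        private = 0
        if abs(social) <= 1 and search_cost < 2 * q - 1:
            signal = theta if rng.random() < q else 1 - theta
            private = 2 if signal == 1 else -2   # one signal outweighs one observation

        score = social + private
        action = 1 if score > 0 else 0 if score < 0 else rng.choice([0, 1])
        actions.append(action)

    # Share of late movers choosing the correct action (proxy for long-run learning).
    tail = actions[-200:]
    return sum(a == theta for a in tail) / len(tail)

if __name__ == "__main__":
    rates = [simulate(seed=s) for s in range(20)]
    print("mean share of late movers choosing correctly:",
          sum(rates) / len(rates))
```

Varying `search_cost`, `k_obs`, or the acquisition rule in this sketch gives a rough sense of how observation structure and costly information interact; it is not a test of the paper's results.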
