Abstract

Many organizations employ algorithms that learn from their members and then shape the way these individuals learn. Nevertheless, decades of research on organizational learning suggest that imperfect learning algorithms could sustain suboptimal beliefs that trap organizations indefinitely. To study potential algorithmic learning traps, we derive the underexplored theoretical properties of the March (1991) mutual learning model and demonstrate the conditions under which individuals should trust learning algorithms' recommendations. Our results show that the received wisdom regarding the benefit of slow learning and diversity does not hold when algorithms cannot identify accurate beliefs but instead follow the majority. The presence of non-discerning or even manipulated algorithms suggests that individuals should learn fast rather than slow, to reduce the chance that algorithms learn the wrong, misleading lessons that would otherwise diffuse and contaminate everyone. Our exploitation of the March model generates novel insights that are increasingly relevant, thus promoting the model's generalization and making its beauty more robust.
