Abstract

Concerns about models of cultural adaptation as analogs of genetic selection have led cognitive game theorists to explore learning-theoretic specifications. Two prominent examples, the Bush-Mosteller stochastic learning model and the Roth-Erev payoff-matching model, are aligned and integrated as special cases of a general reinforcement learning model. Both models predict stochastic collusion as a backward-looking solution to the problem of cooperation in social dilemmas, based on a random walk into a self-reinforcing cooperative equilibrium. The integration uncovers hidden assumptions that constrain the generality of the theoretical derivations. Specifically, Roth and Erev assume a “power law of learning”: the curious but plausible tendency for learning to diminish with success and intensify with failure. Computer simulation is used to explore the effects of this assumption on stochastic collusion in three social dilemma games. The analysis shows how the integration of alternative models can uncover underlying principles and lead to a more general theory.
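To make the two models concrete, the following is a minimal Python sketch (not the authors' implementation) of Bush-Mosteller stochastic learning and Roth-Erev payoff matching in a repeated Prisoner's Dilemma, one of the social dilemma games in this literature. The payoff values, aspiration level, learning rate, and initial propensities are illustrative assumptions, not parameters from the paper. With these settings, two identical learners can wander, by the random walk described above, into mutual cooperation that then reinforces itself (stochastic collusion).

```python
# Minimal sketch, assuming an illustrative Prisoner's Dilemma payoff matrix
# and parameter values; not the paper's actual simulation code.
import random

# Payoffs to the row player: T=5 > R=3 > P=1 > S=0 (a Prisoner's Dilemma).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}


class BushMosteller:
    """Bush-Mosteller stochastic learning: the probability of the chosen
    action rises after satisfactory payoffs and falls after unsatisfactory
    ones, judged against an aspiration level."""

    def __init__(self, learning_rate=0.5, aspiration=2.0, p_cooperate=0.5):
        self.l = learning_rate
        self.aspiration = aspiration  # payoff separating reward from punishment
        self.p = p_cooperate          # current probability of playing C

    def choose(self):
        return "C" if random.random() < self.p else "D"

    def update(self, action, payoff):
        # Stimulus in [-1, 1]: positive if the payoff beat the aspiration.
        s = (payoff - self.aspiration) / 3.0
        if action == "C":
            # Satisfactory outcomes push p toward 1, unsatisfactory toward 0.
            self.p += self.l * s * (1 - self.p) if s >= 0 else self.l * s * self.p
        else:
            # Reinforcing D moves the cooperation probability the other way.
            self.p -= self.l * s * self.p if s >= 0 else self.l * s * (1 - self.p)


class RothErev:
    """Roth-Erev payoff matching: choice probabilities match cumulative
    payoff propensities, so learning decelerates as experience accumulates."""

    def __init__(self, initial_propensity=1.0):
        self.q = {"C": initial_propensity, "D": initial_propensity}

    def choose(self):
        total = self.q["C"] + self.q["D"]
        return "C" if random.random() < self.q["C"] / total else "D"

    def update(self, action, payoff):
        self.q[action] += payoff  # reinforcement proportional to realized payoff


def mutual_cooperation_rate(agent_cls, rounds=5000, seed=1):
    """Fraction of rounds in which two identical learners both cooperate."""
    random.seed(seed)
    a, b = agent_cls(), agent_cls()
    both_c = 0
    for _ in range(rounds):
        ma, mb = a.choose(), b.choose()
        a.update(ma, PAYOFF[(ma, mb)])
        b.update(mb, PAYOFF[(mb, ma)])
        both_c += (ma == "C") and (mb == "C")
    return both_c / rounds


if __name__ == "__main__":
    print("Bush-Mosteller:", mutual_cooperation_rate(BushMosteller))
    print("Roth-Erev:    ", mutual_cooperation_rate(RothErev))
```

Note how the Roth-Erev learner's choice probabilities divide by cumulative propensities: as payoffs accumulate, each new reinforcement shifts behavior less than the last. This built-in deceleration is one way to read the power-law-of-learning assumption that the paper identifies as constraining the generality of the Roth-Erev derivations.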
