Abstract

In efforts to resolve social dilemmas, reinforcement learning is an alternative to imitation and exploration in evolutionary game theory. While imitation and exploration rely on the performance of neighbors, in reinforcement learning individuals alter their strategies based on their own past performance. For example, according to the Bush–Mosteller model of reinforcement learning, an individual’s strategy choice is driven by whether the received payoff satisfies a preset aspiration. Stimuli also play a key role in reinforcement learning, since they determine whether a strategy is kept or abandoned. Here we use the Monte Carlo method to study pattern formation and phase transitions towards cooperation in social dilemmas that are driven by reinforcement learning. We distinguish local and global players according to the source of the stimulus they experience. While global players receive their stimuli from the whole neighborhood, local players focus solely on individual performance. We show that global players play a decisive role in ensuring cooperation, whereas local players fail in this regard, although both types of players show properties of ‘moody cooperators’. In particular, global players evoke stronger conditional cooperation in their neighborhoods based on direct reciprocity, which is rooted in the emerging spatial patterns and stronger interfaces around cooperative clusters.
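The aspiration-driven update described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact formulation: the function names, the linear stimulus normalization, and the learning-rate parameter are assumptions. A local player evaluates the stimulus from its own payoff alone, while a global player (as described in the abstract) derives it from the performance of the whole neighborhood.

```python
def bm_update(p_keep, payoff, aspiration, learning_rate=0.5, payoff_range=1.0):
    """Bush–Mosteller-style update (sketch): the probability of keeping the
    current strategy rises when the payoff meets the aspiration and falls
    otherwise. The linear form and parameter names are assumptions."""
    stimulus = (payoff - aspiration) / payoff_range  # normalized to [-1, 1]
    if stimulus >= 0:
        # Satisfied: reinforce the current strategy.
        return p_keep + (1.0 - p_keep) * learning_rate * stimulus
    # Dissatisfied: reduce the probability of keeping the strategy.
    return p_keep + p_keep * learning_rate * stimulus

def global_payoff(own_payoff, neighbor_payoffs):
    """Hypothetical global-player input: the stimulus is computed from the
    average payoff of the player and its whole neighborhood, rather than
    from the player's own payoff alone."""
    payoffs = [own_payoff] + list(neighbor_payoffs)
    return sum(payoffs) / len(payoffs)
```

For example, a local player with `p_keep = 0.5` whose payoff exceeds its aspiration becomes more likely to repeat its strategy, while a global player facing a poorly performing neighborhood is pushed to switch even if its own payoff was satisfactory.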
