Abstract
We study the long-run conventions emerging in a stag-hunt game when agents are myopic best responders. Our main novel assumption is that errors converge to zero at a rate that is positively related to the payoff earned in the past. To fully explore the implications of this error model, we introduce a further novelty in the way we model the interaction structure, assuming that with positive probability agents remain matched together in the next period. We find that, if interactions are sufficiently persistent over time, then the payoff-dominant convention emerges in the long run, while if interactions are sufficiently volatile, then the maximin convention can emerge even if it is not risk-dominant. We contrast these results with those obtained under two alternative error models: uniform mistakes and payoff-dependent mistakes.
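The sketch below illustrates the kind of dynamics the abstract describes: myopic best response in a stag-hunt population, mistake probabilities that shrink faster for agents who earned higher payoffs, and pairwise matches that persist with positive probability. The payoff values, the rematching probability Q_STAY, the decay factor ETA, and the specific error schedule scale ** (1 + last payoff) are illustrative assumptions chosen for this sketch, not the paper's specification.

```python
import random

# Illustrative stag-hunt payoffs (hypothetical values, not from the paper).
# Action 0 = "stag" (payoff-dominant), action 1 = "hare" (maximin).
PAYOFF = {(0, 0): 4, (0, 1): 0, (1, 0): 3, (1, 1): 3}

N = 20        # number of agents (even)
T = 5000      # number of periods
Q_STAY = 0.9  # probability a matched pair stays together next period
ETA = 0.999   # per-period shrink factor of the common error scale

def best_response(opponent_action):
    """Myopic best response to the partner's last observed action."""
    return 0 if PAYOFF[(0, opponent_action)] >= PAYOFF[(1, opponent_action)] else 1

def simulate(seed=0):
    rng = random.Random(seed)
    actions = [rng.randint(0, 1) for _ in range(N)]
    last_payoff = [0.0] * N
    scale = 1.0  # common error scale, converging to zero over time

    # initial random matching into pairs
    agents = list(range(N))
    rng.shuffle(agents)
    pairs = [(agents[k], agents[k + 1]) for k in range(0, N, 2)]

    for _ in range(T):
        scale *= ETA
        new_actions = actions[:]
        for i, j in pairs:
            for me, other in ((i, j), (j, i)):
                # mistake probability vanishes faster the higher the past payoff
                eps = scale ** (1.0 + last_payoff[me])
                if rng.random() < eps:
                    new_actions[me] = rng.randint(0, 1)  # error: random action
                else:
                    new_actions[me] = best_response(actions[other])
        actions = new_actions
        for i, j in pairs:
            last_payoff[i] = PAYOFF[(actions[i], actions[j])]
            last_payoff[j] = PAYOFF[(actions[j], actions[i])]

        # persistent matching: each pair stays together with prob Q_STAY;
        # the remaining agents are rematched uniformly at random
        staying, free = [], []
        for p in pairs:
            if rng.random() < Q_STAY:
                staying.append(p)
            else:
                free.extend(p)
        rng.shuffle(free)
        pairs = staying + [(free[k], free[k + 1]) for k in range(0, len(free), 2)]

    # share of the population playing the payoff-dominant action
    return sum(1 for a in actions if a == 0) / N

if __name__ == "__main__":
    print("share playing stag after", T, "periods:", simulate())
```

Varying Q_STAY in such a sketch is one informal way to see the abstract's contrast: with highly persistent matches, coordination on the payoff-dominant action tends to survive occasional mistakes, whereas frequent rematching makes the safer maximin action more attractive to a myopic best responder.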