Abstract

We test whether deviations from Nash equilibrium in rent-seeking contests can be explained by the slow convergence of payoff-based learning. We identify and eliminate two noise sources that slow down learning: first, opponents change their actions across rounds; second, payoffs are probabilistic, which reduces the correlation between expected and realized payoffs. We find that average choices do not differ significantly from the risk-neutral Nash equilibrium predictions only when both noise sources are eliminated, by supplying foregone-payoff information and removing payoff risk. Payoff-based learning explains these results better than alternative theories. We propose a hybrid learning model that combines reinforcement and belief learning with risk, social, and other preferences, and show that it fits the data well, largely because of its reinforcement-learning component.
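The rent-seeking contests studied are presumably Tullock-style lottery contests, in which each player's probability of winning a prize is proportional to their effort. As a hedged illustration of the risk-neutral Nash benchmark the abstract refers to (the function names, parameter values, and grid search below are my own sketch, not the paper's), the following computes the well-known symmetric equilibrium effort x* = V(n-1)/n² and verifies numerically that it is a best response:

```python
# Sketch of a Tullock lottery contest (illustrative; not the paper's code).
# n risk-neutral players choose effort x_i >= 0; player i wins a prize V
# with probability x_i / sum_j x_j, and pays the cost x_i regardless.

def expected_payoff(x_i, others_total, V):
    """Expected payoff from bidding x_i against opponents' total effort."""
    total = x_i + others_total
    win_prob = x_i / total if total > 0 else 0.0
    return win_prob * V - x_i

def symmetric_nash_effort(n, V):
    """Closed-form symmetric equilibrium effort: x* = V*(n-1)/n**2."""
    return V * (n - 1) / n ** 2

if __name__ == "__main__":
    n, V = 2, 100.0
    x_star = symmetric_nash_effort(n, V)      # 25.0 for n=2, V=100
    # Verify x* is a best response when the opponent also plays x*,
    # by grid search over efforts 0..100 in steps of 0.01:
    others = (n - 1) * x_star
    grid = [x / 100 for x in range(0, 10001)]
    best = max(grid, key=lambda x: expected_payoff(x, others, V))
    print(x_star, best)                        # both 25.0
```

Under this benchmark, observed over-dissipation in experiments is the deviation that the paper attributes to slow, noisy payoff-based learning rather than to preferences alone.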
