Abstract

Political scientists use experiments to test the predictions of game-theoretic models. In a typical experiment, each subject makes choices that determine her own earnings and the earnings of other subjects, with payments corresponding to the utility payoffs of a theoretical game. But social preferences distort the correspondence between a subject's cash earnings and her subjective utility. Because social preferences vary, anonymously matched subjects cannot know their opponents' preferences over outcomes, which turns many laboratory tasks into games of incomplete information. We reduce the distortion of social preferences by pitting subjects against algorithmic agents ("Nashbots"). Across 11 experimental tasks, subjects facing human opponents played rationally only 36% of the time, but those facing algorithmic agents did so 60% of the time. We conclude that experimentalists have underestimated the economic rationality of laboratory subjects by designing tasks that are poor analogies to the games they purport to test.
