Abstract

Prior research has used reinforcement-learning models to investigate human decisions in choice games. However, research has not investigated how the reinforcement-learning models expectancy valence learning (EVL) and prospect valence learning (PVL) would explain human decisions in applied judgment games where people face a collective risk social dilemma (CRSD) against societal problems such as climate change. In the CRSD game, a group of players invests part of their private incomes in a public fund over several rounds with the goal of collectively reaching a climate target; if the target is not reached, climate change occurs with a certain probability, and players lose their remaining incomes. In this article, we propose EVL and PVL models of the CRSD game and calibrate model parameters to aggregate and individual human decisions across four between-subjects information conditions, in which half of the players in each condition possessed less wealth (poor) than the other half (rich). Results showed that calibration to individual decisions provided a more accurate account than calibration to aggregate decisions, and the EVL model fit better than the PVL model in most conditions. Both models outperformed the symmetric Nash model across all conditions. Overall, moderate recency, loss aversion, and exploration drove people's decisions. We present the implications of our model results for situations involving a CRSD.
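As an illustrative sketch only (not the authors' code), the core mechanics the abstract refers to can be outlined as follows: in EVL, each outcome is valenced by a loss-aversion weight, an expectancy for the chosen option is updated with a recency (learning-rate) parameter, and choices follow a softmax rule whose sensitivity governs exploration. Function and parameter names below are assumptions for illustration.

```python
import numpy as np

def evl_update(expectancy, gain, loss, w, recency):
    """One delta-rule expectancy update in the EVL style.

    w       : loss-aversion weight in [0, 1] (higher = losses weighted more)
    recency : learning rate in [0, 1] (higher = recent outcomes dominate)
    NOTE: illustrative form; the paper's exact parameterization may differ.
    """
    utility = (1 - w) * gain - w * loss   # valenced outcome
    return expectancy + recency * (utility - expectancy)

def softmax_choice_probs(expectancies, sensitivity):
    """Softmax choice rule; lower sensitivity means more exploration."""
    z = sensitivity * np.asarray(expectancies, dtype=float)
    z -= z.max()                          # numerical stability
    p = np.exp(z)
    return p / p.sum()
```

For example, with `w = 0.5` and `recency = 0.3`, an outcome of gain 10 and loss 5 moves an expectancy of 0 toward the valenced utility 2.5 by a step of 0.3. PVL differs mainly in transforming outcomes with a prospect-theory utility function before the update.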
