Abstract

This paper presents an agent-based model of team reasoning in a social dilemma game. Starting from the conundrum of empirically high levels of cooperation in dilemma games, which contradicts the traditional utility-maximisation assumptions of game theory, Bacharach (1999, 2006) developed a theory of team reasoning. The idea behind team reasoning is that agents do not try to maximise their own utility but make choices as part of a team. This paper presents a model of preference convergence, mirroring the adaptation dynamics of team reasoning. It describes an agent-based model simulating a repeated public goods game between a designated set of agents, a team. In the model, each agent chooses cooperation or defection with some probability, adjusting these preferences in response to the revealed choices of the other players. The model is a classic binary choice model, mapping an individual's preference for cooperation onto the binary behavioural choice of cooperation or defection. Preferences are updated in reaction to the behavioural choices of the team. Starting from simple stated preferences, the model implements a reframing of utility maximisation as applying to a group rather than an individual, modelling the importance of social interaction for individual preferences and the dependency of choice on social context. Results show that team reasoning, as implemented here, can explain the high levels of cooperation found in the real world across a wide range of settings. They also show that team reasoning, as implemented here, is not a ‘sucker’ strategy except when adaptation rates are very slow. This paper demonstrates how agent-based models can be used to examine the role of social contexts in individual decision making.
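The dynamics described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the paper's actual implementation: it assumes each agent holds a cooperation preference expressed as a probability, acts on it each round, and then nudges that preference toward the team's revealed cooperation rate at an assumed adaptation rate.

```python
import random

def simulate(n_agents=10, rounds=200, adapt_rate=0.2, seed=1):
    """Sketch of preference convergence in a repeated public goods game.

    Each agent holds a preference p in [0, 1]: its probability of
    cooperating in a given round. After observing the team's revealed
    behaviour, each agent moves its preference toward the team's
    cooperation rate. Parameter names and the linear update rule are
    illustrative assumptions, not taken from the paper.
    """
    rng = random.Random(seed)
    # initial stated preferences, drawn uniformly at random
    prefs = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        # behavioural choice: cooperate (1) or defect (0)
        choices = [1 if rng.random() < p else 0 for p in prefs]
        # the team's revealed cooperation rate this round
        coop_rate = sum(choices) / n_agents
        # each agent adjusts its preference toward the team's behaviour
        prefs = [p + adapt_rate * (coop_rate - p) for p in prefs]
    return prefs
```

Because every preference is pulled toward the same observed rate each round, the gap between any two agents' preferences shrinks by a factor of (1 − adapt_rate) per round, so the team converges on a shared preference; a very small `adapt_rate` slows this convergence, which is consistent with the abstract's note that slow adaptation is where cooperators risk being exploited.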
