Abstract

Helping strangers at a cost to oneself is a hallmark of many human interactions, but it is difficult to justify from the viewpoint of natural selection, particularly in anonymous one-shot interactions. Reputational scoring can provide the necessary motivation via "indirect reciprocity," but maintaining reliable scores requires close oversight to prevent cheating. We show that, in the absence of such supervision, scores can instead be managed by mutual consent between the agents themselves rather than by third parties. The space of possible strategies for such "consented" score changes is very large but, using a simple cooperation game, we search it, asking what kinds of agreement can i) invade a population from rare and ii) resist invasion once common. We prove mathematically and demonstrate computationally that score mediation by mutual consent does enable cooperation without oversight. Moreover, the most invasive and stable strategies belong to one family and ground the concept of value by incrementing one score at the cost of the other, thus closely resembling the token exchange that underlies money in everyday human transactions. The most successful strategy has the flavor of money, except that two agents who both lack money can generate new score when they meet. This strategy is evolutionarily stable and has higher fitness, but it is not physically realizable in a decentralized way; when conservation of score is enforced, more money-like strategies dominate. The equilibrium distribution of scores under any strategy in this family is geometric, meaning that agents with score 0 are inherent to money-like strategies.
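The conserved, money-like case described above can be illustrated with a minimal simulation sketch. This is not the paper's model, just a hypothetical toy version under simple assumptions: agents meet in random pairs, and the helper transfers one unit of score to the helped agent, but only if the helper has score to spend, so total score is conserved. The function name and parameters are illustrative.

```python
import random
from collections import Counter

def simulate(num_agents=1000, mean_score=2, rounds=200_000, seed=0):
    """Toy conserved token-exchange model: in each meeting, the
    helper pays 1 unit of score to the helped agent, provided the
    helper's score is positive. Total score never changes."""
    rng = random.Random(seed)
    scores = [mean_score] * num_agents  # everyone starts with the mean
    for _ in range(rounds):
        helper, helped = rng.sample(range(num_agents), 2)
        if scores[helper] > 0:  # cannot pay from an empty score
            scores[helper] -= 1
            scores[helped] += 1
    return Counter(scores)

dist = simulate()
# The stationary distribution is approximately geometric: counts
# fall off with score, and a sizeable fraction of agents sit at 0.
print(sorted(dist.items())[:4])
```

Under these assumptions the simulated histogram decays roughly geometrically with score, consistent with the abstract's claim that agents with score 0 are an inherent feature of money-like strategies.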
