Abstract

How to enforce cooperation among unrelated agents is one of the main concerns in distributed artificial intelligence. The limited success of existing efforts to achieve large-scale cooperation is partly due to the lack of strong sanctioning mechanisms to restrain defectors who undermine cooperation. Here we use the framework of evolutionary game theory to investigate the emergence of cooperation in the collective-risk dilemma (CRD) through the application of instrumental ostracism to defectors. By analyzing the stochastic model, we find that instrumental ostracism promotes the emergence of widespread cooperation more effectively than punishment, and this advantage is most pronounced when the risk of collective loss is relatively low. Furthermore, we verify that polycentric sanctioning institutions combining punishment and ostracism are more effective than a single global institution in raising the level of cooperation. Our model applies to many collective action problems, such as climate change mitigation and vaccination during the COVID-19 pandemic, and can thus offer guidance for decision makers.
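
As a minimal, purely illustrative sketch of how an ostracism sanction reshapes incentives in a threshold game of this kind (this is not the paper's model; the payoff rule, the parameter names b, c, threshold, and risk, and the exclusion rule are all assumptions), a one-round CRD payoff could be written as:

    # Illustrative one-round collective-risk-dilemma payoff (assumed form).
    # b: endowment, c: contribution paid by cooperators,
    # threshold: cooperators needed to avert collective loss,
    # risk: probability that an under-provisioned group loses everything,
    # ostracize: whether defectors in failed groups are excluded (payoff 0).
    def crd_payoff(is_cooperator, n_cooperators,
                   b=1.0, c=0.1, threshold=3, risk=0.5, ostracize=False):
        kept = b - c if is_cooperator else b
        if n_cooperators >= threshold:
            # Target reached: everyone keeps what remains of the endowment.
            return kept
        if ostracize and not is_cooperator:
            # Defector in a failed group is expelled and earns nothing.
            return 0.0
        # Otherwise the remaining endowment survives only with prob. 1 - risk.
        return (1.0 - risk) * kept

    # A free-rider among 2 cooperators at low risk still profits in expectation...
    print(crd_payoff(False, 2, risk=0.3))                  # 0.7
    # ...unless ostracism removes that option entirely.
    print(crd_payoff(False, 2, risk=0.3, ostracize=True))  # 0.0

Under these assumptions the toy example mirrors the abstract's point: when risk is low, the expected loss alone barely deters free-riding, whereas exclusion makes defection unprofitable regardless of risk.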
