Abstract

In multiagent systems, social norms are a useful mechanism for regulating agents' behaviors to achieve coordination or cooperation among agents. An important research question is how a desirable social norm can emerge in a bottom-up manner through local interactions. In this paper, we propose two novel learning strategies under the collective learning framework, collective learning EV-l and collective learning EV-g, to efficiently facilitate the emergence of social norms. Experimental results show that both learning strategies support the emergence of desirable social norms more efficiently, and in a much broader range of multiagent interaction scenarios, than previous work, and are also robust across different network topologies.
