Abstract
Social norms, such as social rules and conventions, play a pivotal role in sustaining system order by regulating and controlling individual behaviors toward a global consensus in large-scale distributed systems. Systematic studies of efficient mechanisms that facilitate the emergence of social norms enable us to design and build robust distributed systems, such as electronic institutions and norm-governed sensor networks. This paper studies the emergence of social norms via learning from repeated local interactions in networked multiagent systems. A collective learning framework, which imitates the opinion-aggregation process in human decision making, is proposed to study the impact of agents' local collective behaviors on the emergence of social norms in a number of different situations. In the framework, each agent interacts repeatedly with all of its neighbors. At each step, an agent first takes a best-response action toward each of its neighbors and then combines these actions into a final action using ensemble learning methods. Extensive experiments are carried out to evaluate the framework with respect to different network topologies, learning strategies, numbers of actions, influences of nonlearning agents, and so on. Experimental results reveal significant insights into the manipulation and control of norm emergence in networked multiagent systems achieved through local collective behaviors.
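The following is a minimal sketch of the collective learning loop described above, not the authors' implementation. It assumes a ring network, a pure coordination game (payoff 1 when two neighbors choose the same action, 0 otherwise), per-neighbor Q-learning with epsilon-greedy exploration as the best-response strategy, and majority voting as the ensemble method; all parameter values are illustrative.

```python
import random
from collections import Counter

NUM_AGENTS = 50
NUM_ACTIONS = 4          # size of the action space (a norm is a shared action)
EPSILON = 0.1            # exploration rate (assumed)
ALPHA = 0.3              # learning rate (assumed)
STEPS = 2000

# Ring topology: each agent's neighbors are the agents on its left and right.
neighbors = {i: [(i - 1) % NUM_AGENTS, (i + 1) % NUM_AGENTS] for i in range(NUM_AGENTS)}

# Q[agent][neighbor][action]: estimated payoff of playing `action` against `neighbor`.
Q = {i: {j: [0.0] * NUM_ACTIONS for j in neighbors[i]} for i in range(NUM_AGENTS)}

def best_response(agent, neighbor):
    """Epsilon-greedy best response of `agent` toward one particular neighbor."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    q = Q[agent][neighbor]
    return q.index(max(q))

def combine(actions):
    """Ensemble step: majority vote over the per-neighbor best responses."""
    return Counter(actions).most_common(1)[0][0]

for step in range(STEPS):
    # 1) Each agent computes a best response toward every neighbor, then
    #    aggregates them into one final action via the ensemble (majority vote).
    final_action = {}
    for i in range(NUM_AGENTS):
        per_neighbor = [best_response(i, j) for j in neighbors[i]]
        final_action[i] = combine(per_neighbor)

    # 2) Each agent plays its final action against every neighbor and updates
    #    the corresponding per-neighbor Q-value with the coordination payoff.
    for i in range(NUM_AGENTS):
        a = final_action[i]
        for j in neighbors[i]:
            payoff = 1.0 if a == final_action[j] else 0.0
            Q[i][j][a] += ALPHA * (payoff - Q[i][j][a])

# A norm has emerged when (nearly) all agents converge on the same final action.
print("action frequencies:", dict(Counter(final_action.values())))
```

Under these assumptions, repeated aggregation of per-neighbor best responses typically drives the population toward a single dominant action; other topologies, learning strategies, and ensemble rules studied in the paper can be swapped in at the marked steps.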