Abstract
Social norms, such as social rules and conventions, play a pivotal role in sustaining system order by facilitating coordination and cooperation in multiagent systems. This paper studies the neural basis for the emergence of social norms in multiagent systems by modeling each agent as a spiking neural system that learns through reinforcement of stochastic synaptic transmission. A spiking neural learning model is proposed to encode the interaction information in the input spike train of the neural network and decode the agents' decisions from the output spike train. Learning takes place at the synapses by adjusting their firing rates, based on the presynaptic spike train, an eligibility trace that records synaptic actions, and the reinforcement feedback from the interactions. Experimental results show that this basic neural level of learning is capable of sustaining the emergence of social norms, and that different learning parameters and encoding methods in the neural system can give rise to various macro-level emergent phenomena. This paper makes an initial step towards understanding the correlation between neural synaptic activities and global social consistency, and towards revealing the neural mechanisms underlying agent-level decision making in multiagent systems.
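To make the learning rule concrete, the following is a minimal sketch of reward-modulated stochastic synaptic transmission with an eligibility trace, in the spirit of the mechanism the abstract describes. All function names, parameter names, and values here are hypothetical illustrations, not the paper's actual implementation:

```python
import random

def update_synapse(pre_spikes, reward, p_release=0.5, lr=0.1, decay=0.9):
    """Hypothetical sketch of one synapse learning by reinforcement of
    stochastic transmission.

    pre_spikes : list of 0/1 presynaptic spikes over one interaction episode
    reward     : scalar reinforcement from the agent's interaction (e.g. +1/-1)
    p_release  : current probability that a presynaptic spike triggers release
    Returns the updated release probability.
    """
    eligibility = 0.0
    for s in pre_spikes:
        # Each synaptic action (release or failure) is stochastic.
        released = (s == 1) and (random.random() < p_release)
        # The eligibility trace records recent synaptic actions:
        # positive when release occurred, negative when it failed.
        if s == 1:
            eligibility = decay * eligibility + (1.0 if released else -1.0)
        else:
            eligibility = decay * eligibility
    # Reinforcement feedback correlates synaptic actions with outcomes:
    # rewarded actions are made more likely, punished ones less likely.
    p_release = min(1.0, max(0.0, p_release + lr * reward * eligibility))
    return p_release
```

Under this sketch, an agent's decision in a norm-emergence game would be decoded from the output spike train produced by many such synapses, and the game's payoff would supply the `reward` signal.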