Abstract

There have been numerous studies on collective behavior, in which communication between agents strongly influences both the payoff and the cost of decision making. Research usually focuses on improving the rate of collective synchronization or accelerating cooperation under given communication cost constraints. In this context, evolutionary game theory (EGT) and reinforcement learning (RL) are essential frameworks for tackling this intricate problem. In this study, an adapted Vicsek model is introduced in which agents move differently depending on their chosen strategies. Each agent receives a payoff determined by the benefit of collective motion weighed against the cost of communicating with neighboring agents. Individuals select their target agents using a Q-learning strategy and then update their strategies according to the Fermi rule. The results show that, with Q-learning applied, cooperation and synchronization peak at an optimal communication radius. Similar conclusions hold for the influence of random noise and relative cost. Several cost functions were examined to demonstrate that the proposed model and its conclusions are robust across a wide range of conditions. (https://github.com/WangchengjieT/VM-EGT-Q)
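
The following is a minimal sketch of the kind of dynamics the abstract describes: Vicsek-style alignment for cooperators, a payoff combining a local order benefit with a communication cost, and Fermi-rule strategy imitation. The Q-learning layer for selecting target agents is omitted for brevity, and all parameter names and values below are illustrative assumptions, not taken from the paper or the linked repository.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
N = 100        # number of agents
L = 10.0       # side length of the periodic square arena
R = 1.0        # communication radius
V = 0.3        # constant speed
ETA = 0.2      # amplitude of angular noise
COST = 0.05    # communication cost per neighbour contacted
K = 0.1        # Fermi-rule selection temperature
STEPS = 500

rng = np.random.default_rng(0)
pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)
coop = rng.integers(0, 2, size=N).astype(bool)   # True = cooperator (communicates)

def neighbours(pos, i):
    """Indices of agents within radius R of agent i (periodic boundaries)."""
    d = pos - pos[i]
    d -= L * np.round(d / L)                      # minimum-image convention
    return np.where(np.hypot(d[:, 0], d[:, 1]) < R)[0]

for _ in range(STEPS):
    new_theta = theta.copy()
    payoff = np.zeros(N)
    for i in range(N):
        nb = neighbours(pos, i)
        if coop[i]:
            # Vicsek alignment: steer towards the mean heading of neighbours
            new_theta[i] = np.arctan2(np.sin(theta[nb]).mean(),
                                      np.cos(theta[nb]).mean())
            payoff[i] -= COST * (len(nb) - 1)     # pay to communicate
        new_theta[i] += ETA * rng.uniform(-np.pi, np.pi)
        # Benefit of collective motion: local order parameter of the neighbourhood
        payoff[i] += np.hypot(np.cos(theta[nb]).mean(), np.sin(theta[nb]).mean())
    theta = new_theta
    pos = (pos + V * np.column_stack((np.cos(theta), np.sin(theta)))) % L

    # Fermi rule: each agent compares its payoff with one random neighbour
    for i in range(N):
        nb = neighbours(pos, i)
        nb = nb[nb != i]
        if len(nb) == 0:
            continue
        j = rng.choice(nb)
        if rng.random() < 1.0 / (1.0 + np.exp((payoff[i] - payoff[j]) / K)):
            coop[i] = coop[j]

# Global order parameter: 1 = fully synchronised, 0 = disordered
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"order={order:.3f}, cooperator fraction={coop.mean():.3f}")
```

Sweeping the assumed communication radius R (and the noise amplitude ETA or the cost COST) in such a sketch is one way to probe the kind of optimum reported in the abstract; the paper's actual implementation is available at the linked repository.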
