Collective behavior has been studied extensively, and in such systems communication between agents strongly affects both the payoff and the cost of decision making. Research typically focuses on improving the collective synchronization rate or accelerating cooperation under given communication-cost constraints. In this context, evolutionary game theory (EGT) and reinforcement learning (RL) are natural frameworks for tackling this problem. In this study, an adapted Vicsek model is introduced in which agents move differently depending on their chosen strategies. Each agent receives a payoff that weighs the benefit of collective motion against the cost of communicating with neighboring agents. Individuals select their target agents via Q-learning and then update their strategies according to the Fermi rule. The results show that, once Q-learning is applied, cooperation and synchronization peak at an optimal communication radius. Similar conclusions hold for the influence of random noise and relative cost. Different cost functions are also considered to demonstrate that the proposed model and its conclusions remain robust across a wide range of conditions. (https://github.com/WangchengjieT/VM-EGT-Q)
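The sketch below illustrates one round of the kind of dynamics the abstract describes: cooperators perform Vicsek-style alignment and pay a per-neighbor communication cost, defectors keep their own heading, and strategies are then updated with the Fermi rule. It is a minimal illustration only; the Q-learning target-selection step is omitted, and all parameter names and the exact payoff form (local order minus a linear communication cost) are assumptions rather than the authors' definitions.

```python
import numpy as np

# Minimal sketch of one round of an adapted Vicsek model with a Fermi-rule
# strategy update. Parameters (L, r, v0, eta, c, kappa) and the payoff form
# are illustrative assumptions, not the paper's exact definitions.

rng = np.random.default_rng(0)

N, L = 100, 10.0            # number of agents, box size (periodic boundaries)
r, v0, eta = 1.0, 0.1, 0.2  # communication radius, speed, angular noise
c, kappa = 0.05, 0.1        # cost per communicated neighbor, Fermi temperature

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)
coop = rng.integers(0, 2, size=N).astype(bool)  # True = cooperator

def neighbors(i):
    """Indices of agents within radius r of agent i (periodic distance)."""
    d = np.abs(pos - pos[i])
    d = np.minimum(d, L - d)
    return np.where((d ** 2).sum(axis=1) <= r ** 2)[0]

# Movement: cooperators align with neighbors, defectors keep their heading.
new_theta = theta.copy()
payoff = np.zeros(N)
for i in range(N):
    nbrs = neighbors(i)  # includes i itself
    if coop[i]:
        # Vicsek alignment (circular mean of neighbor headings) plus noise
        new_theta[i] = np.arctan2(np.sin(theta[nbrs]).mean(),
                                  np.cos(theta[nbrs]).mean())
        new_theta[i] += eta * rng.uniform(-np.pi, np.pi)
    # Assumed payoff: local order (benefit of collective motion) minus a
    # communication cost paid only by cooperators.
    local_order = np.abs(np.exp(1j * theta[nbrs]).mean())
    payoff[i] = local_order - (c * (len(nbrs) - 1) if coop[i] else 0.0)

theta = new_theta
pos = (pos + v0 * np.c_[np.cos(theta), np.sin(theta)]) % L

# Fermi rule: agent i imitates a random neighbor j with probability
# 1 / (1 + exp((P_i - P_j) / kappa)).
for i in range(N):
    nbrs = neighbors(i)
    nbrs = nbrs[nbrs != i]
    if len(nbrs) == 0:
        continue
    j = rng.choice(nbrs)
    if rng.random() < 1.0 / (1.0 + np.exp((payoff[i] - payoff[j]) / kappa)):
        coop[i] = coop[j]

print("cooperator fraction:", coop.mean(),
      "global order:", np.abs(np.exp(1j * theta).mean()))
```

Iterating this round and tracking the cooperator fraction and the global order parameter is enough to reproduce the qualitative trade-off the abstract discusses: a larger radius improves alignment but raises the communication cost borne by cooperators.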