Abstract

Complex collective motion patterns can emerge from very simple local interactions among individual agents. However, how and why interactions among individuals give rise to collective motion remains unclear. Modeling is an effective way to understand the mechanisms that govern collective animal motion. In this work, to avoid imposing a fixed set of rules a priori as classical approaches do, we propose a new method of modeling collective motion in fish schools via multi-agent reinforcement learning. We model each individual fish as an artificial learning agent whose policy is acquired with mean field Q-learning (MFQ). The observation of each fish agent is represented as a multi-channel image, where each channel describes a different feature, such as neighboring agents' positions or orientations. Each agent's policy is approximated by a neural network trained with the MFQ algorithm; during training, agents are rewarded according to their number of neighbors and penalized for consecutive collisions between individuals. We study the dynamics of collective motion that emerge from the learned policy. The experimental results show that the learned policy can produce collective motion in groups of various sizes. In addition, three different collective motion patterns observed in nature emerged during the training process. The learned policy can help us gain new insight into how and why individual interactions lead to collective motion. This study also demonstrates that multi-agent reinforcement learning has great potential as a new approach to the analysis and modeling of collective motion.
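To make the setup above concrete, the sketch below illustrates, in simplified form, the three ingredients the abstract names: an egocentric multi-channel image observation, a reward built from neighbor counts and consecutive collisions, and the one-step mean field Q-learning target in which an agent's Q-function conditions on its neighbors' mean action. All function names, the grid size, and the reward weights here are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch, not the authors' implementation. The grid size,
# reward weights, and all names below are assumed for illustration.
import numpy as np

GRID = 15  # side length of the egocentric observation image (assumed)

def observation(positions, headings, agent, radius=5.0):
    """Encode neighbors of `agent` as a 2-channel image: channel 0 marks
    occupied cells, channel 1 stores the occupying neighbor's heading."""
    obs = np.zeros((2, GRID, GRID))
    center = positions[agent]
    for j, (p, h) in enumerate(zip(positions, headings)):
        if j == agent:
            continue
        d = p - center
        if np.abs(d).max() < radius:  # neighbor falls inside the view window
            col, row = ((d / radius + 1.0) / 2.0 * (GRID - 1)).astype(int)
            obs[0, row, col] = 1.0           # presence channel
            obs[1, row, col] = h / np.pi     # orientation channel, scaled
    return obs

def reward(num_neighbors, consecutive_collisions, w_n=0.1, w_c=1.0):
    """Reward cohesion, penalize repeated collisions (weights assumed)."""
    return w_n * num_neighbors - w_c * consecutive_collisions

def mfq_target(q_next, r, gamma=0.95, beta=1.0):
    """One-step MFQ target r + gamma * v(s'). `q_next[a]` holds
    Q(s', a, abar): the value of own action `a` given the neighbors'
    mean action abar, which is an input to the Q-network in MFQ.
    v(s') is the Boltzmann-weighted average over own actions."""
    logits = beta * q_next
    pi = np.exp(logits - logits.max())   # numerically stable softmax
    pi /= pi.sum()
    return r + gamma * float(np.sum(pi * q_next))
```

The softmax in `mfq_target` reflects how MFQ typically computes the state value v(s'), via a Boltzmann policy over the agent's own actions given the neighbors' mean action; the exact observation channels, network architecture, and reward weights used in the study may differ.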
