Abstract

The design of rules governing the behaviour of a follower in a leader-follower system is a non-trivial task. In this paper, we investigate three Boids-like behavioural rules: alignment, attraction and separation. We systematically design and investigate the impact of different reward functions on the three behaviours using evolutionary computation methods. A Learning Classifier System, starting from a set of random rules, is used to evolve the Follower behaviour of agents within a simulated leader-follower environment. We present a series of systematic experiments to reveal and understand the interdependency between the incrementally designed reward functions and the performance of the Follower in conforming to the three rules. We demonstrate that an incremental and systematic design of the reward function is sufficient to reproduce Reynolds' rules from zero domain knowledge, and that the solution is robust against shifts caused by evolutionary dynamics.
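To make the three behaviours concrete, the following is a minimal, hypothetical sketch of Reynolds' alignment, attraction (cohesion) and separation rules for a single follower agent. The function name, weights and 2D representation are illustrative assumptions, not taken from the paper; in the paper these rules are evolved by the Learning Classifier System rather than hand-coded.

```python
# Illustrative sketch of Reynolds' three Boids-like rules (alignment,
# attraction/cohesion, separation). All names and weights are
# hypothetical assumptions for exposition, not the paper's evolved rules.

def steer(pos, vel, neighbors, w_align=1.0, w_attract=1.0, w_separate=1.5):
    """Return a steering vector combining the three Boids-like rules.

    pos, vel: (x, y) tuples for the follower agent.
    neighbors: list of (position, velocity) tuples for nearby agents,
               e.g. the leader in a leader-follower system.
    """
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)

    # Alignment: steer toward the average heading of the neighbours.
    avg_vx = sum(v[0] for _, v in neighbors) / n
    avg_vy = sum(v[1] for _, v in neighbors) / n
    align = (avg_vx - vel[0], avg_vy - vel[1])

    # Attraction (cohesion): steer toward the neighbours' centre of mass.
    cx = sum(p[0] for p, _ in neighbors) / n
    cy = sum(p[1] for p, _ in neighbors) / n
    attract = (cx - pos[0], cy - pos[1])

    # Separation: steer away from neighbours to avoid crowding.
    sep_x = sum(pos[0] - p[0] for p, _ in neighbors)
    sep_y = sum(pos[1] - p[1] for p, _ in neighbors)

    return (w_align * align[0] + w_attract * attract[0] + w_separate * sep_x,
            w_align * align[1] + w_attract * attract[1] + w_separate * sep_y)
```

In a reward-shaping setting such as the paper's, each of the three terms would instead correspond to a component of the reward function, and the relative weighting is exactly the kind of interdependency the experiments probe.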
