Abstract
The paper deals with field-based coordination of an agent team in which the continental divide game serves as the coordination mechanism. The team consists of self-interested mobile intelligent agents whose behaviour is modelled by coordination policies based on adaptive learning algorithms. Three learning algorithms are used: the three-parameter Roth-Erev algorithm, a stateless Q-learning algorithm, and the experience-weighted attraction algorithm. The coordination policies are analyzed using replicator dynamics from evolutionary game theory, and a case study evaluating their performance in light of this analysis is presented.
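As a rough illustration of the kind of adaptive learning policy the abstract refers to, the sketch below shows a stateless Q-learning agent playing a repeated coordination game. The action set, payoff function, and parameter values are placeholders chosen for the example and are not taken from the paper.

```python
import math
import random

# Illustrative stateless Q-learning policy for a repeated coordination game.
# The 14-action set echoes the continental divide game, but the payoff
# function and parameters below are placeholders, not the paper's values.

ACTIONS = list(range(1, 15))   # 14 candidate choices
ALPHA = 0.1                    # learning rate
TAU = 0.5                      # Boltzmann exploration temperature


def choose_action(q):
    """Sample an action with probability proportional to exp(Q(a) / TAU)."""
    weights = [math.exp(q[a] / TAU) for a in ACTIONS]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for a, w in zip(ACTIONS, weights):
        acc += w
        if r <= acc:
            return a
    return ACTIONS[-1]


def update(q, action, reward):
    """Stateless Q-learning: nudge the chosen action's value toward the reward."""
    q[action] += ALPHA * (reward - q[action])


def payoff(own, other):
    """Placeholder payoff that rewards matching the other agent's choice."""
    return 10.0 - abs(own - other)


# Minimal usage: two agents repeatedly play and learn from the placeholder payoff.
q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
for _ in range(1000):
    choices = [choose_action(q) for q in q_tables]
    for i, q in enumerate(q_tables):
        update(q, choices[i], payoff(choices[i], choices[1 - i]))
```

The Roth-Erev and experience-weighted attraction algorithms mentioned in the abstract fit the same choose-act-update loop, but they accumulate action propensities (attractions) rather than Q-value estimates.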