Abstract

We describe reciprocal agents that build virtual associations in a bottom-up manner based on past cooperative work and that preferentially allocate tasks or resources to agents in the same associations in busy, large-scale distributed environments. Models of multiagent systems (MAS) are often used to express tasks performed by teams of cooperative agents, so how each subtask is allocated to appropriate agents is a central issue. Particularly in busy environments where multiple tasks are requested simultaneously and continuously, simple allocation methods used by self-interested agents result in conflicts, in which multiple tasks are allocated to one or a few capable agents, and the system's performance degrades. To avoid such conflicts, we introduce reciprocal agents that cooperate preferentially with specific agents with whom they have had good mutual experiences of cooperation. These agents then autonomously build associations within which they try to form teams for new incoming tasks. We introduce the N-agent team formation (TF) game, an abstraction of allocation problems in MAS that eliminates unnecessary and complicated task and agent specifications, thereby exposing the fundamental mechanism that facilitates and maintains associations. We experimentally show that, by establishing association structures, reciprocal agents considerably improve performance by reducing the number of conflicts in N-agent TF games with different values of N. We also investigate how the learning parameters that determine reciprocity affect association structures and which structures achieve efficient allocation.
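To make the allocation mechanism concrete, the following Python sketch simulates a toy round of an N-agent TF game under our own simplifying assumptions; it is not the paper's implementation. The population size, team size, learning rate, the leader-recruits-partners protocol, and the reciprocity-update rule are all illustrative choices. In each round several tasks arrive simultaneously, each task's leader recruits N-1 partners while preferring peers with high learned reciprocity, a conflict occurs when more than one task requests the same agent, and conflict-free teams reinforce mutual reciprocity so that associations can emerge bottom-up.

```python
# Illustrative sketch only: a toy N-agent team formation (TF) round with
# reciprocity learned from past conflict-free cooperation. All constants,
# names, and the update rule are assumptions made for this example.
import random
from collections import defaultdict

NUM_AGENTS = 20       # population size (assumed)
TEAM_SIZE = 3         # N in the N-agent TF game (assumed)
LEARNING_RATE = 0.1   # reinforcement step for reciprocity (assumed)

# reciprocity[i][j]: agent i's learned preference for teaming with agent j
reciprocity = defaultdict(lambda: defaultdict(float))

def recruit_team(leader, busy):
    """Leader recruits N-1 partners, preferring peers with high reciprocity."""
    candidates = [a for a in range(NUM_AGENTS) if a != leader and a not in busy]
    random.shuffle(candidates)  # break ties randomly before the stable sort
    candidates.sort(key=lambda a: reciprocity[leader][a], reverse=True)
    return candidates[:TEAM_SIZE - 1]

def run_round(num_tasks):
    """One round: tasks arrive simultaneously; return the number of conflicts."""
    leaders = random.sample(range(NUM_AGENTS), num_tasks)
    busy = set(leaders)                 # leaders cannot be recruited as partners
    claimed = defaultdict(list)         # partner -> leaders that requested it
    teams = {}
    for leader in leaders:
        partners = recruit_team(leader, busy)
        teams[leader] = partners
        for p in partners:
            claimed[p].append(leader)
    # A conflict: one agent is requested by two or more simultaneous tasks.
    conflicts = sum(1 for requesters in claimed.values() if len(requesters) > 1)
    # Conflict-free teams strengthen mutual reciprocity, building associations.
    for leader, partners in teams.items():
        if all(len(claimed[p]) == 1 for p in partners):
            for p in partners:
                reciprocity[leader][p] += LEARNING_RATE
                reciprocity[p][leader] += LEARNING_RATE
    return conflicts

if __name__ == "__main__":
    total = sum(run_round(num_tasks=5) for _ in range(200))
    print(f"conflicts over 200 rounds: {total}")
```

In early rounds all reciprocity scores are zero, so partners are chosen at random and conflicts are frequent; as conflict-free teams are reinforced, each leader converges on a small set of preferred partners, which is the sense in which association structures reduce conflicts in this toy setting.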
