Abstract

Multi-agent systems are complex systems in which multiple autonomous entities, called agents, cooperate in order to achieve common or individual goals. These entities may be computer programs, robots, or even humans. In fact, many multi-agent systems are intended to operate in cooperation with or as a service for humans. Typically, multi-agent systems are designed assuming perfectly rational, self-interested agents, in accordance with the principles of classical game theory. Recently, such strong assumptions have been relaxed in various ways, one of which is to explicitly include principles derived from human behavior. For instance, research in the field of behavioral economics shows that humans are not purely self-interested; they also care strongly about fairness. Multi-agent systems that fail to take fairness into account may therefore be insufficiently aligned with human expectations and may fail to reach their intended goals. In this paper, we present an overview of work on fairness in multi-agent systems. More precisely, we first look at the classical agent model, that is, rational decision making. We then outline descriptive models of fairness, that is, models that explain how and why humans reach fair decisions. Finally, we consider prescriptive, computational models for achieving fairness in adaptive multi-agent systems, and we show that the results obtained by these models are compatible with experimental and analytical results from behavioral economics.
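
As a concrete illustration of the descriptive models referred to above (the choice of model here is our example; the abstract itself does not single one out), behavioral economics often formalizes fairness preferences via the Fehr-Schmidt inequity-aversion utility, in which an agent's utility decreases as payoffs become more unequal:

\[
U_i(x) = x_i \;-\; \frac{\alpha_i}{n-1} \sum_{j \neq i} \max(x_j - x_i,\, 0) \;-\; \frac{\beta_i}{n-1} \sum_{j \neq i} \max(x_i - x_j,\, 0)
\]

Here $x_i$ is agent $i$'s material payoff, $\alpha_i \ge \beta_i$ weighs disadvantageous inequity (envy), and $0 \le \beta_i < 1$ weighs advantageous inequity (guilt). An agent maximizing $U_i$ rather than $x_i$ alone will reject sufficiently unequal allocations, consistent with human behavior observed in ultimatum-game experiments.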
