Abstract

Recent advances in technology allow multi-agent systems to be deployed in cooperation with, or as a service for, humans. Typically, such systems are designed assuming individually rational agents, following the principles of classical game theory. However, research in behavioral economics has shown that humans are not purely self-interested: they care strongly about fairness. Multi-agent systems that fail to take fairness into account may therefore not be sufficiently aligned with human expectations and may fail to reach their intended goals. In this paper, we present a computational model for achieving fairness in adaptive multi-agent systems. The model combines Continuous Action Learning Automata with the Homo Egualis utility function; the novel contribution of our work is that this function is used in an explicit, computational manner. We show that the results obtained by agents using this model are compatible with experimental and analytical results on human fairness obtained in the field of behavioral economics.
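As a concrete illustration of what evaluating the Homo Egualis function explicitly can look like, the sketch below implements its standard n-player form, u_i = x_i − α/(n−1)·Σ_j max(x_j − x_i, 0) − β/(n−1)·Σ_j max(x_i − x_j, 0) with 0 ≤ β < α: agent i's raw payoff is penalized for disadvantageous inequity (weight α) and, less strongly, for advantageous inequity (weight β). This is a minimal sketch under that standard formulation; the parameter values and the payoff-split example are illustrative assumptions, not the paper's experimental setup or its learning-automata component.

```python
import numpy as np

def homo_egualis_utility(payoffs, i, alpha, beta):
    """Homo Egualis utility of agent i given the payoff vector of all agents.

    Subtracts from agent i's raw payoff a penalty for disadvantageous
    inequity (weighted by alpha) and a smaller penalty for advantageous
    inequity (weighted by beta), with 0 <= beta < alpha.
    """
    x = np.asarray(payoffs, dtype=float)
    n = len(x)
    disadvantage = np.sum(np.maximum(x - x[i], 0.0))  # agents earning more than i
    advantage = np.sum(np.maximum(x[i] - x, 0.0))     # agents earning less than i
    return x[i] - (alpha / (n - 1)) * disadvantage - (beta / (n - 1)) * advantage

# Illustrative two-agent split of 10 (alpha and beta chosen arbitrarily):
# the disadvantaged agent values an unequal (8, 2) split below an equal one.
print(homo_egualis_utility([8.0, 2.0], i=1, alpha=0.8, beta=0.2))  # 2 - 0.8*6 = -2.8
print(homo_egualis_utility([5.0, 5.0], i=1, alpha=0.8, beta=0.2))  # 5.0
```

With α > β, the function captures the asymmetry reported in behavioral economics: agents dislike being behind more than they dislike being ahead, which is what drives fairness-seeking behavior when such a utility is optimized by learning agents.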
