Abstract

In this paper, we review two approaches to the regulation of agent interactions based on Piaget's theory of social exchanges. Both approaches model a social equilibrium supervisor that, at each time step, recommends exchange actions to the agents in order to lead the interaction toward equilibrium, that is, toward a balance of the exchange values involved in the exchanges. The first approach uses a centralized supervisor that has access to every agent's internal state and issues recommendations to lead the agents to an equilibrium in their exchanges. This centralized supervisor uses a Qualitative Interval Markov Decision Process (QI-MDP) to determine the best recommendation for the agents. The second approach is decentralized: each agent has an equilibrium supervisor internalized in it. In this model, the supervisor internalized in an agent has access to that agent's internal state, but is unable to access the internal states of the other agents. To give exchange recommendations to the supervised agent, the internalized supervisor uses BDI (Beliefs, Desires, Intentions) plans derived from the optimal interaction policy provided by a Partially Observable Markov Decision Process (POMDP). We present an analysis of the two approaches, aiming to identify which features of each approach can be used to improve the other.
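The supervisor's core idea can be illustrated with a toy sketch. This is not the paper's QI-MDP formalization: the tolerance interval, the action names, and the balance bookkeeping below are illustrative assumptions. It only shows the general pattern of classifying each agent's exchange-value balance against an equilibrium interval and recommending an action expected to move that balance toward zero.

```python
# Toy sketch of an equilibrium supervisor (illustrative, not the
# authors' QI-MDP model). Each agent carries a scalar balance of
# exchange values; the supervisor recommends an action intended to
# bring that balance back into an assumed equilibrium interval.

EQUILIBRIUM = (-1.0, 1.0)  # assumed tolerance interval around zero

def classify(balance, interval=EQUILIBRIUM):
    """Qualitatively classify a balance against the equilibrium interval."""
    lo, hi = interval
    if balance < lo:
        return "unfavorable"  # agent has invested more than it received
    if balance > hi:
        return "favorable"    # agent has received more than it invested
    return "equilibrium"

def recommend(balances):
    """Map each agent's balance to a recommended exchange action."""
    recs = {}
    for agent, balance in balances.items():
        state = classify(balance)
        if state == "unfavorable":
            recs[agent] = "request_service"  # claim credit, raising the balance
        elif state == "favorable":
            recs[agent] = "offer_service"    # repay debt, lowering the balance
        else:
            recs[agent] = "free_choice"      # already within equilibrium
    return recs

print(recommend({"a": -3.2, "b": 2.5, "c": 0.4}))
# → {'a': 'request_service', 'b': 'offer_service', 'c': 'free_choice'}
```

In the centralized approach, one such supervisor would read all agents' balances directly; in the decentralized approach, each internalized supervisor sees only its own agent's state and must estimate the rest, which is what motivates the POMDP machinery.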
