Abstract

An important problem in open multiagent systems is the regulation of social exchanges toward producing social equilibrium. This problem may be generalized to the regulation of the interactions of autonomous agents that cooperate or compete in order to achieve their individual and collective objectives. In this paper, we take an abstract, generalizing approach to this issue. The problem is formalized as a regulation model for the sequential decision making of an agent, acting in an open, partially observable stochastic environment, with the aim of inducing another autonomous agent to interact in a certain way, so as to lead both agents toward a target exchange-state configuration. The regulation model combines a partially observable Markov decision process (POMDP), which structures the regulator agent's decision process, with a hidden Markov model (HMM), which structures its exchange-strategy learning process. The main challenge we face is the reciprocal conversion between POMDPs and HMMs. The solution we have found builds on the particular structures of the POMDPs and HMMs that arise in the context of the regulation of social exchanges, which allow a kind of isomorphism to be established between the two models. This paper develops these ideas formally, stating and proving the conversion theorems, and shows their application to an example of the regulation of social exchanges.
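The paper's actual conversion theorems are not reproduced here, but the general correspondence they build on can be sketched: a POMDP whose action is held fixed reduces to an HMM over the same hidden states, so the POMDP belief update and the HMM forward recursion compute the same filtered state distribution. The following minimal sketch illustrates this under illustrative assumptions (the two-state transition and observation matrices and the constant-action policy are invented for the example, not taken from the paper's model):

```python
import numpy as np

# Hypothetical POMDP: 2 hidden exchange states, 2 regulator actions, 2 observations.
# T[a][s][s'] is the transition probability under action a;
# Z[a][s'][o] is the probability of observing o after reaching s' via action a.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.1, 0.9]]])

def pomdp_to_hmm(a):
    """Under a constant policy that always plays action a, the POMDP's hidden
    dynamics reduce to an HMM: transition matrix T[a], emission matrix Z[a]."""
    return T[a], Z[a]

def belief_update(b, a, o):
    """POMDP belief update: b'(s') ∝ Z[a][s'][o] * sum_s T[a][s][s'] b(s)."""
    b_new = (b @ T[a]) * Z[a][:, o]
    return b_new / b_new.sum()

def forward(A, B, obs, init):
    """HMM forward recursion (normalized): the same computation as the
    belief update, applied once per observation in the sequence."""
    alpha = init.copy()
    for o in obs:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# The two filters agree on any observation sequence when the action is fixed:
A, B = pomdp_to_hmm(0)
b = np.array([0.5, 0.5])
for o in [1, 0, 1]:
    b = belief_update(b, 0, o)
print(np.allclose(b, forward(A, B, [1, 0, 1], np.array([0.5, 0.5]))))
```

This shows only the easy direction (fixed-action POMDP → HMM); the paper's contribution is the reciprocal conversion for the structured models arising in social-exchange regulation.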

