Abstract

A number of well-known methods exist for solving Markov decision problems (MDPs) involving a single decision-maker, with or without model uncertainty. Recently, there has been great interest in the multi-agent version of the problem, in which there are multiple interacting decision-makers. However, most of the proposed methods for multi-agent MDPs require complete knowledge of the states and actions of all agents, which results in a large communication overhead when the agents are physically distributed. In this paper, we address the problem of coping with uncertainty regarding agent states and actions under different amounts of communication. In particular, assuming a known model and a common reward structure, hidden Markov models and techniques for partially observable MDPs (POMDPs) are combined to estimate the states or actions (or both) of other agents. Simulation results are presented to compare the performance that can be realized under different assumptions on agent communication.
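To illustrate the kind of estimation the abstract describes, the sketch below shows a standard HMM forward-filter update for maintaining a belief over another agent's state from local observations. This is a generic illustration, not the authors' algorithm: the transition matrix T, observation matrix O, and the function belief_update are hypothetical names assumed here, with the model taken as known per the paper's setting.

```python
import numpy as np

def belief_update(belief, T, O, obs):
    """One HMM forward-filter step (illustrative sketch, not the paper's code).

    belief : (S,) current probability distribution over the other agent's states
    T      : (S, S) transition matrix, T[s, s2] = P(s2 | s)  (assumed known)
    O      : (S, Z) observation matrix, O[s, z] = P(z | s)   (assumed known)
    obs    : index of the observation received about the other agent
    """
    predicted = belief @ T              # predict: propagate belief through the dynamics
    updated = predicted * O[:, obs]     # correct: weight by the observation likelihood
    return updated / updated.sum()      # normalize to a valid distribution

# Hypothetical example: 3 states, 2 possible observations.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
O = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
belief = np.array([1.0, 0.0, 0.0])      # initially certain the agent is in state 0
belief = belief_update(belief, T, O, obs=1)
print(belief)
```

In the paper's setting, an agent would run a filter of this form in place of direct communication, trading message overhead for estimation uncertainty about the other agents' states or actions.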
