Abstract

This paper presents properties and results of a new framework for sequential decision-making in multiagent settings, called interactive partially observable Markov decision processes (I-POMDPs). I-POMDPs generalize POMDPs, a well-known framework for decision-theoretic planning in uncertain domains, to cases in which an agent must plan a course of action in an environment populated by other agents.
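For readers unfamiliar with the formalism, the sketch below shows how an agent $i$'s I-POMDP is typically written in the literature; the notation here is an illustrative assumption and may differ slightly from the paper's own definitions. The key change relative to a POMDP is that the physical state space is replaced by an interactive state space that pairs physical states with models of the other agent, so that beliefs become nested.

\[
\text{I-POMDP}_i \;=\; \langle IS_i,\; A,\; T_i,\; \Omega_i,\; O_i,\; R_i \rangle,
\qquad IS_i = S \times M_j,\quad A = A_i \times A_j,
\]

where $S$ is the set of physical states, $M_j$ the set of possible models of the other agent $j$ (including intentional models, i.e. types, which themselves contain beliefs), $T_i : S \times A \times S \to [0,1]$ the transition function over physical states, $\Omega_i$ agent $i$'s observations with observation function $O_i$, and $R_i$ agent $i$'s reward function.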
