Abstract

A partially observable Markov decision process (POMDP) is an appropriate mathematical modeling tool for dynamic stochastic systems in which some or all of the system states are not completely observable to the decision maker. In this respect, POMDPs generalize completely observable Markov decision processes (MDPs): partial observability is handled by reasoning over belief states, which form an infinite (continuous) state space. However, the resulting models are computationally intractable even for relatively small problems. Therefore, POMDPs are frequently approximated by solving variants of completely observable MDPs defined on a finite grid of states. This article summarizes the relationships between completely and partially observable MDPs and derives inequalities for the POMDP value function using the optimal value function of the grid-based MDPs.
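To make the belief-state view concrete, the following is a minimal sketch (not from the article) of a Bayes-filter belief update for a hypothetical two-state, two-action, two-observation POMDP. All numerical values are illustrative assumptions; the point is that the updated belief is a point on the continuous probability simplex, which is why grid-based discretizations are used in practice.

```python
import numpy as np

# Hypothetical POMDP parameters (illustrative, not taken from the article).
# T[a, s, s']: probability of moving from state s to s' under action a.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.4, 0.6]]])
# O[s', z]: probability of observing z when the new state is s'.
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(b, a, z):
    """Bayes-filter update: b'(s') ∝ O[s', z] * sum_s T[a, s, s'] * b(s)."""
    b_pred = b @ T[a]           # prediction step: push belief through dynamics
    b_post = O[:, z] * b_pred   # correction step: weight by observation likelihood
    return b_post / b_post.sum()  # normalize back onto the belief simplex

# Starting from a uniform belief, each (action, observation) pair yields a
# new belief vector anywhere on the simplex -- an uncountable state space.
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, z=1)
```

A grid-based approximation would restrict attention to a finite set of such belief vectors and interpolate the value function between them, which is the construction whose bounds the article analyzes.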
