Abstract

An adaptive control problem is formulated for a discrete-time Markov process that is completely observed in a fixed recurrent domain and partially observed elsewhere, and a solution is given by constructing an approximately self-optimal strategy. The state space of the Markov process is either a closed subset of Euclidean space or a countable set. A second adaptive control problem is also solved, in which the process is only ever partially observed but there is a family of random times such that the values of the process sampled at these times form a family of independent, identically distributed random variables.
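The flavor of the first problem can be illustrated with a toy simulation. The sketch below is purely hypothetical (the chain, the parameter `TRUE_P`, the exploration rate, and the certainty-equivalence rule are all assumptions, not the paper's model): a random walk on {0, 1, 2, 3} is fully observed inside a recurrent set D = {0, 1} and only coarsely observed elsewhere, and an adaptive controller estimates the unknown drift parameter from the transitions it actually observes inside D, then acts on that estimate.

```python
import random

# Illustrative assumptions only, not the paper's construction:
# a two-action random walk on {0, 1, 2, 3}, fully observed inside
# the recurrent set D = {0, 1}, with an unknown drift parameter p.
D = {0, 1}
TRUE_P = 0.7  # unknown to the controller

def transition(state, action, rng):
    # Action 0 moves down with probability p; action 1 with probability 1 - p.
    p_down = TRUE_P if action == 0 else 1.0 - TRUE_P
    if rng.random() < p_down:
        return max(state - 1, 0)
    return min(state + 1, 3)

def simulate(horizon=20000, seed=1):
    rng = random.Random(seed)
    state = 0
    downs = ups = 0          # observed outcomes of action-0 transitions in D
    time_in_D = 0
    for _ in range(horizon):
        observed = state in D
        p_hat = downs / (downs + ups) if downs + ups else 0.5
        if observed and rng.random() < 0.1:
            action = 0       # occasional forced exploration inside D
        else:
            # Certainty-equivalence: pick whichever action the current
            # estimate says is more likely to move the chain toward D.
            action = 0 if p_hat >= 0.5 else 1
        nxt = transition(state, action, rng)
        if observed and action == 0:
            # Inside D the down branch always yields nxt <= state,
            # and the up branch yields nxt > state.
            if nxt <= state:
                downs += 1
            else:
                ups += 1
        state = nxt
        time_in_D += state in D
    return p_hat, time_in_D / horizon
```

Because estimation uses only the fully observed excursions through D, the recurrence of D guarantees the estimate keeps improving, which is the intuition behind an approximately self-optimal strategy.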

