Abstract

This paper considers optimal feedback control policies for a class of discrete stochastic distributed-parameter systems. The class under consideration has the property that the random variable in the dynamic system depends only on time and possesses the Markovian property with stationary transition probabilities. A necessary condition for optimality of a feedback control policy, which has a form similar to the Hamiltonian form in the deterministic case, is derived via a dynamic programming approach.
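As an illustrative sketch only, and not the paper's own notation or formulation: for a generic discrete-time system $x_{k+1} = f(x_k, u_k, w_k)$ whose disturbance $w_k$ is a Markov chain with stationary transition probabilities $p(w' \mid w)$, a dynamic programming recursion of the form

$$V_k(x, w) = \min_{u} \Big\{ g(x, u) + \sum_{w'} p(w' \mid w)\, V_{k+1}\big(f(x, u, w), w'\big) \Big\}$$

is the standard starting point from which Hamiltonian-type necessary conditions, analogous to the deterministic case, are typically derived.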
