Abstract

The convex analytic method has proved to be a versatile approach to the study of infinite horizon average cost optimal stochastic control problems. In this paper, we revisit the convex analytic method and make three primary contributions: (i) We present an existence result for controlled Markov models that lack weak continuity of the transition kernel but are strongly continuous in the action variable for every fixed state variable. (ii) For average cost stochastic control problems in standard Borel spaces, while existing results establish the optimality of stationary (possibly randomized) policies, few results are available on the optimality of deterministic policies. We review the existing results and present further conditions under which an average cost optimal stochastic control problem admits an optimal solution that is deterministic and stationary. (iii) We establish conditions under which the performance values attained by stationary deterministic (and also quantized) policies are dense in the set of performance values attained by randomized stationary policies.
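
For context, the convex analytic method recasts the average cost control problem as a linear program over occupation measures. The sketch below is the standard formulation only; the symbols $\mathbb{X}$, $\mathbb{U}$, $\mathcal{T}$, $c$, and the constraint set $\mathcal{G}$ are illustrative notation and not necessarily the paper's. For a controlled Markov chain on a Borel state space $\mathbb{X}$ with action space $\mathbb{U}$, transition kernel $\mathcal{T}(\,\cdot\mid x,u)$, and one-stage cost $c$, a policy $\gamma$ incurs the long-run average cost
\[
J(x,\gamma) \;=\; \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}_x^{\gamma}\!\left[\sum_{t=0}^{T-1} c(X_t, U_t)\right],
\]
and, under appropriate conditions on $c$ and $\mathcal{T}$, minimizing over policies reduces to a linear program over the convex set of invariant occupation measures:
\[
\inf_{\gamma} J(x,\gamma) \;=\; \inf_{\nu \in \mathcal{G}} \int_{\mathbb{X} \times \mathbb{U}} c(x,u)\, \nu(dx,du),
\]
\[
\mathcal{G} := \Big\{ \nu \in \mathcal{P}(\mathbb{X} \times \mathbb{U}) \;:\;
\nu(B \times \mathbb{U}) = \int_{\mathbb{X} \times \mathbb{U}} \mathcal{T}(B \mid x,u)\, \nu(dx,du)
\quad \forall B \in \mathcal{B}(\mathbb{X}) \Big\}.
\]
A minimizing invariant occupation measure $\nu^*$, when it exists, yields a stationary (in general randomized) policy via the disintegration $\nu^*(dx,du) = \pi^*(dx)\,\gamma^*(du \mid x)$; the contributions summarized above concern when such a minimizer exists, when it can be taken to correspond to a deterministic stationary policy, and how well deterministic or quantized policies approximate the attainable performance values.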
