Abstract

We study a general approximation scheme for infinite-dimensional linear programming (LP) problems that arise naturally in stochastic control. We prove that the optimal values of the approximating problems converge to the value of the original LP problem. For the controls, we show that if the approximating optimal controls converge, then the limiting control is optimal for the original LP problem. As an application of this theory, we present numerical approximations to the LP formulation of continuous-time stochastic control problems, covering both long-term average and discounted criteria. For an example whose theoretical solution is known, our numerical approximations are very accurate.
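
For orientation, a standard LP formulation of this type for the long-term average problem is posed over occupation measures; the notation below (state space $E$, control space $U$, generator $A$, running cost $c$, discount rate $\alpha$, initial state $x_0$) is not taken from the abstract and is included only as a sketch of the usual setup under the customary assumptions.

\[
\begin{aligned}
\text{minimize}\quad & \int_{E\times U} c(x,u)\,\mu(dx,du) \\
\text{subject to}\quad & \int_{E\times U} A f(x,u)\,\mu(dx,du) = 0 \quad \text{for all } f \in \mathcal{D}(A),\\
& \mu \ge 0,\qquad \mu(E\times U) = 1.
\end{aligned}
\]

In the discounted variant, the adjoint constraint is typically replaced by $\int A f\,d\mu = \alpha\bigl(\int f\,d\mu - f(x_0)\bigr)$ for each test function $f$. The infinite dimensionality comes from the measure-valued decision variable and the infinite family of constraints, which is what the approximation scheme must discretize.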
