Abstract
Using the linear programming approach to stochastic control introduced by Buckdahn, Goreac, and Quincampoix, and by Goreac and Serea, we provide a semigroup property for certain sets of probability measures, leading to dynamic programming principles for stochastic control problems. An abstract principle is established for general bounded costs. Linearized versions are obtained under further (semi)continuity assumptions.
Highlights
Using the linear programming approach to stochastic control introduced in [6] and [10], we provide a semigroup property for certain sets of probability measures, leading to dynamic programming principles for stochastic control problems.
Linear programming tools have been used efficiently to deal with stochastic control problems.
An approach relying mainly on Hamilton-Jacobi(-Bellman) equations has been developed in [9] for deterministic control systems. This approach has been generalized to controlled Brownian diffusions.
Summary
Linear programming tools have been used efficiently to deal with stochastic control problems (see [3], [4], [11], [12], [13], [14] and references therein). Using Hamilton-Jacobi-Bellman techniques, it is proven in [6] and [10] that minimizing continuous cost functionals with respect to the new set of constraints leads to the same value. These formulations turn out to provide the generalized solution of the (discontinuous) Hamilton-Jacobi-Bellman equation. An alternative is to use a weak formulation in which the value function is replaced by a test function (cf. [5]). This short paper aims at giving another approach to dynamic programming principles, based on linear programming techniques (cf. [6], [10]). The abstract principle becomes an equality, providing a linearized programming principle, if the cost functionals are continuous.
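The flavor of this linearized formulation can be sketched as follows; the notation below is illustrative only (the sets, measures, and symbols are hypothetical placeholders, not taken from the paper). The value function is written as the infimum of a linear functional over a set of occupation-type probability measures, and the dynamic programming principle is expressed through a semigroup property of these sets.

```latex
% Illustrative sketch (hypothetical notation):
% \Theta(t,x) denotes a set of occupation-type probability measures
% associated with admissible controls started at (t,x),
% f is a bounded running cost, and U is the control set.
V(t,x) \;=\; \inf_{\gamma \in \Theta(t,x)} \int f \, d\gamma .
% A semigroup property of the sets \Theta(t,x) then yields, for
% intermediate times t \le s, a dynamic programming principle of the form
V(t,x) \;=\; \inf_{\gamma \in \Theta(t,x)}
  \left( \int_{[t,s]\times\mathbb{R}^{d}\times U} f \, d\gamma
  \;+\; \int_{\mathbb{R}^{d}} V(s,y)\, \gamma_{s}(dy) \right),
% where \gamma_{s} stands for the time-s marginal of \gamma; for general
% bounded costs this holds as an abstract principle, and it becomes the
% linearized equality above when the cost functionals are continuous.
```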