Abstract

We interpret the solution of a fully nonlinear second-order partial differential equation as the value function of a certain optimal controlled diffusion problem. The equation involves a second-order elliptic partial differential operator parametrized by the control variable α ∈ A, with given coefficient functions σ, b, and c, together with a real-valued nonlinearity that may depend on the solution and its gradient. A particular case, obtained when this nonlinearity does not depend on the solution or its gradient, is the well-known Hamilton-Jacobi-Bellman equation. The problem is formulated as follows: the state equation of the control problem is a classical controlled diffusion, and the cost functional is described by the adapted solution of a certain backward stochastic differential equation. The paper discusses Bellman's dynamic programming principle for this problem, and the value function is proved to be a viscosity solution of the above possibly degenerate fully nonlinear equation.
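
For orientation, the following is a minimal LaTeX sketch of the standard formulation the abstract describes, in the spirit of the BSDE approach to stochastic control. The notation is assumed rather than taken from the paper: the operator is written L_α, the BSDE generator f, the terminal function g, the Brownian motion W, the dimensions n and d, and the finite horizon T; the choice of supremum (rather than infimum) and the placement of c as a zeroth-order term in L_α are likewise assumptions.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Controlled (forward) state equation: a classical diffusion in R^n driven by
% a d-dimensional Brownian motion W, with the control process \alpha_s taking
% values in A.  (Dimensions n, d and the time horizon T are assumed here.)
\begin{equation*}
  dX_s = b(s, X_s, \alpha_s)\,ds + \sigma(s, X_s, \alpha_s)\,dW_s,
  \qquad X_t = x \in \mathbb{R}^n, \quad t \le s \le T.
\end{equation*}

% Cost functional: the first component Y of the adapted solution (Y, Z) of a
% backward stochastic differential equation with generator f and terminal
% condition g(X_T); the value function optimizes Y_t over admissible controls
% (the choice of supremum rather than infimum is an assumption).
\begin{align*}
  -\,dY_s &= f(s, X_s, Y_s, Z_s, \alpha_s)\,ds - Z_s\,dW_s,
  \qquad Y_T = g(X_T), \\
  u(t,x)  &= \operatorname*{ess\,sup}_{\alpha(\cdot)} Y_t^{t,x,\alpha(\cdot)}.
\end{align*}

% Generalized Hamilton--Jacobi--Bellman equation satisfied, in the viscosity
% sense, by the value function u; L_\alpha is the elliptic operator of the
% controlled diffusion (placing c as a zeroth-order term is an assumption).
\begin{gather*}
  \frac{\partial u}{\partial t}
  + \sup_{\alpha \in A}
    \Bigl\{ L_\alpha u(t,x)
      + f\bigl(t, x, u(t,x), \sigma^{*}(t,x,\alpha)\,\nabla u(t,x), \alpha\bigr)
    \Bigr\} = 0,
  \qquad u(T,x) = g(x), \\[4pt]
  L_\alpha u
  = \tfrac{1}{2}\sum_{i,j=1}^{n} (\sigma\sigma^{*})_{ij}(t,x,\alpha)\,
      \frac{\partial^{2} u}{\partial x_i \partial x_j}
    + \sum_{i=1}^{n} b_i(t,x,\alpha)\,\frac{\partial u}{\partial x_i}
    - c(t,x,\alpha)\,u.
\end{gather*}

% When f is independent of (Y, Z), i.e. f = f(s, x, \alpha) is a running cost,
% the equation above reduces to the classical HJB equation.

\end{document}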
