Abstract

We consider optimal control problems in which the state X(t) of the system at time t is given by a stochastic differential delay equation. The growth at time t depends not only on the present value X(t) but also on the delayed value X(t-δ) and on a sliding average of previous values, and this dependence may be nonlinear. Using the dynamic programming principle, we derive an associated (finite-dimensional) Hamilton-Jacobi-Bellman (HJB) equation for the value function of such problems. This (finite-dimensional) HJB equation has solutions if and only if the coefficients satisfy a particular system of first-order PDEs. We introduce viscosity solutions for the type of HJB equations considered here, and prove that, under certain conditions, the value function is the unique viscosity solution of the HJB equation. We also give numerical examples for two cases in which the HJB equation reduces to a finite-dimensional one.
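For concreteness, a minimal sketch of the class of controlled delay dynamics described above. The precise coefficients and weighting used in the paper may differ; the exponential weight λ, the control process u, and the specific form of the sliding average S(t) are assumptions made here for illustration only:

\[
dX(t) = b\big(X(t),\, X(t-\delta),\, S(t),\, u(t)\big)\,dt
      + \sigma\big(X(t),\, X(t-\delta),\, S(t),\, u(t)\big)\,dB(t),
\qquad
S(t) = \int_{-\delta}^{0} e^{\lambda s}\, X(t+s)\,ds,
\]

where B is a Brownian motion, X(t-δ) is the state delayed by a fixed lag δ > 0, and S(t) is one common choice of sliding average: an exponentially weighted integral of the path over [t-δ, t]. In this setting the value function is sought as a function of the finitely many quantities (t, X(t), X(t-δ), S(t)), which is what makes a finite-dimensional HJB equation conceivable.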
