Abstract

In this work, we derive discrete optimality conditions for optimal control problems governed by stochastic differential equations. Euler and Runge–Kutta methods are used for the discretization. A Lagrange multiplier method is formulated for the resulting discrete-time stochastic optimal control problem, and the discrete adjoint process is obtained in terms of conditional expectations for both discretization schemes. To estimate these nested conditional expectations at each time step via simulation, we use the least-squares Monte Carlo method developed by Longstaff and Schwartz. This is the first work to solve a stochastic optimal control problem by computing the nested conditional expectations numerically with a least-squares Monte Carlo method. Several examples are studied to test and demonstrate the efficiency of the Lagrange multiplier method combined with the least-squares Monte Carlo method.
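The core numerical ingredient described above is the regression step of the Longstaff–Schwartz least-squares Monte Carlo method: a conditional expectation of the form E[g(X_{k+1}) | X_k] is approximated by regressing simulated samples of g(X_{k+1}) onto a polynomial basis in X_k. The sketch below illustrates this idea only; the drift, diffusion, functional g, and basis degree are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a Longstaff-Schwartz regression step: approximate
# E[g(X_{k+1}) | X_k] from simulated paths of an Euler-Maruyama discretized SDE.
# All coefficients below are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama step for dX = mu(X) dt + sigma(X) dW (assumed coefficients)
mu = lambda x: -0.5 * x
sigma = lambda x: 0.4 * np.ones_like(x)
dt, n_paths = 0.01, 10_000

x_k = rng.normal(1.0, 0.2, n_paths)              # samples of X_k
dw = rng.normal(0.0, np.sqrt(dt), n_paths)       # Brownian increments
x_kp1 = x_k + mu(x_k) * dt + sigma(x_k) * dw     # samples of X_{k+1}

g = lambda x: np.maximum(x - 1.0, 0.0)           # illustrative functional of X_{k+1}

# Least-squares regression of g(X_{k+1}) on a polynomial basis in X_k.
degree = 3
basis = np.vander(x_k, degree + 1, increasing=True)   # columns [1, x, x^2, x^3]
coeffs, *_ = np.linalg.lstsq(basis, g(x_kp1), rcond=None)

# The fitted polynomial approximates x -> E[g(X_{k+1}) | X_k = x]; evaluating
# it at each sample gives the pathwise conditional expectation that would be
# needed when propagating a discrete adjoint backward in time.
cond_exp = basis @ coeffs
print(cond_exp[:5])
```

In a backward pass for the discrete adjoint, this regression would be repeated at every time step, replacing each nested conditional expectation by its fitted polynomial surrogate.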
