Abstract

We show how the classical Lagrangian approach to solving constrained optimization problems from standard calculus can be extended to solve continuous-time stochastic optimal control problems. Connections to mainstream approaches such as the Hamilton-Jacobi-Bellman equation and the stochastic maximum principle are drawn. Our approach is related to the stochastic maximum principle, but it is more direct, tied to the classical Lagrangian principle, and avoids the use of backward stochastic differential equations in its formulation. Using infinite-dimensional functional analysis, we formalize and extend the approach first outlined in Chow (1992) within a rigorous mathematical setting. We provide examples that demonstrate the usefulness and effectiveness of our approach in practice. Further, we demonstrate the potential for numerical applications by combining some of our key equations with Monte Carlo backward simulation and linear regression, thereby illustrating an entirely new avenue for the numerical application of Chow's methods.
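The abstract points to a numerical scheme combining Monte Carlo backward simulation with linear regression. The following is a minimal, self-contained sketch of that generic ingredient only: a least-squares Monte Carlo backward pass that estimates conditional cost-to-go expectations along simulated paths. The dynamics, cost functions, polynomial basis, and all parameter values below are illustrative assumptions and are not the paper's actual equations or algorithm.

```python
# Illustrative sketch: regression-based backward Monte Carlo estimation of a
# cost-to-go.  Dynamics, costs, basis, and parameters are assumed placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-dimensional diffusion dX = mu(X) dt + sigma dW
mu = lambda x: -0.5 * x          # mean-reverting drift (assumed)
sigma = 0.3                      # constant volatility (assumed)
running_cost = lambda x: x**2    # running cost f(x) (assumed)
terminal_cost = lambda x: x**2   # terminal cost g(x) (assumed)

T, n_steps, n_paths = 1.0, 50, 20_000
dt = T / n_steps

# Forward Monte Carlo simulation of the state paths; initial states are spread
# out so that the regressions below are informative.
X = np.empty((n_steps + 1, n_paths))
X[0] = rng.normal(1.0, 0.5, n_paths)
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X[k + 1] = X[k] + mu(X[k]) * dt + sigma * dW

# Backward pass: regress the realized cost-to-go on a polynomial basis of the
# current state, giving a pathwise estimate of the conditional expectation
# v(t_k, x) ~ E[ g(X_T) + sum_{j >= k} f(X_{t_j}) dt | X_{t_k} = x ].
basis = lambda x: np.column_stack([np.ones_like(x), x, x**2])
V = terminal_cost(X[-1])
for k in range(n_steps - 1, -1, -1):
    V = V + running_cost(X[k]) * dt                        # accumulate running cost
    coef, *_ = np.linalg.lstsq(basis(X[k]), V, rcond=None)
    V_hat = basis(X[k]) @ coef                             # regression estimate of v(t_k, .)
    # In a control setting, V_hat (and its derivative in x) would feed the
    # first-order optimality condition for the control at time t_k.

print("Regression estimate of v(0, x=1):", (basis(np.array([1.0])) @ coef)[0])
```

The same backward regression machinery is what allows conditional expectations appearing in first-order optimality conditions to be approximated pathwise without solving a PDE; the example above stops short of any control update.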
