Abstract

This paper provides a concise guide to dynamic optimization, with an integrated treatment of a range of optimal control and dynamic programming problems. It presents the essential theorems and methods for obtaining and characterizing solutions to these problems. The paper discusses Pontryagin's maximum principle in optimal control theory under infinite-time horizons as well as fixed and variable finite-time horizons, with and without discounting, in both discrete- and continuous-time settings, alongside the classical calculus of variations method. It also discusses Bellman's principle of optimality in dynamic programming and its relation to the maximum principle. Some elements of stochastic dynamic programming are also discussed.
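For orientation, the following is a minimal sketch of the standard textbook forms the abstract refers to, not the paper's own notation: a discounted, infinite-horizon continuous-time control problem, the current-value Hamiltonian conditions of Pontryagin's maximum principle, and the corresponding Hamilton-Jacobi-Bellman equation; the symbols $f$, $g$, $\rho$, $\lambda$, and $V$ are assumed placeholders for the payoff, the state dynamics, the discount rate, the costate, and the value function.

```latex
% Illustrative sketch only (assumed standard notation, not taken from the paper):
% a discounted infinite-horizon optimal control problem, the current-value
% Hamiltonian conditions, and the Hamilton-Jacobi-Bellman equation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The problem:
\[
  \max_{u(\cdot)} \int_0^{\infty} e^{-\rho t}\, f\bigl(x(t),u(t)\bigr)\,dt
  \quad \text{s.t.} \quad \dot{x}(t) = g\bigl(x(t),u(t)\bigr),
  \qquad x(0) = x_0 .
\]
Current-value Hamiltonian and necessary conditions (maximum principle):
\[
  H(x,u,\lambda) = f(x,u) + \lambda\, g(x,u), \qquad
  \frac{\partial H}{\partial u} = 0, \qquad
  \dot{\lambda} = \rho\,\lambda - \frac{\partial H}{\partial x}.
\]
Hamilton--Jacobi--Bellman equation for the value function $V$:
\[
  \rho\, V(x) = \max_{u}\ \bigl\{\, f(x,u) + V'(x)\, g(x,u) \,\bigr\}.
\]
The two approaches are linked by $\lambda(t) = V'\bigl(x(t)\bigr)$: the
costate equals the marginal value of the state along the optimal path.
\end{document}
```

The last identity is the standard bridge between the two methods the abstract mentions: the maximum principle characterizes the optimal trajectory via the costate, while dynamic programming characterizes the value function, and along an optimal path the costate coincides with the value function's gradient.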
