Abstract

We consider the optimization of dynamical processes, i.e., well-defined sequences of steps in time or space. Such processes may be either discrete or continuous, and the corresponding optimization theories differ in their assumptions, formal description, and in the strength of their optimality conditions. We focus on local optimality conditions for both discrete and continuous process models. Bellman's dynamic programming method and his recurrence equation are employed to derive optimality conditions and to show the passage from the Hamilton–Jacobi–Bellman (HJB) equation to the classical Hamilton–Jacobi equation. As a rule, the use of a computer is assumed for obtaining a numerical solution of an optimization problem. A solution procedure for a discrete recurrence equation runs successively through the stages n = 1, 2, …, N. At each stage, the previously computed optimal profit f_{n-1}(x_{n-1}) is used to find the optimal controls û_n in terms of the state coordinates x_n, that is, the vector function û_n(x_n). Special discrete processes that are linear with respect to free intervals of continuous time t_n are investigated, and it is shown that a Pontryagin-like Hamiltonian H_n is constant along an optimal trajectory. It is stressed, however, that in order to achieve the absolute maximum of H_n, an optimal discrete process requires much stronger assumptions on the rate functions and constraining sets than the continuous process. Nondifferentiable (viscosity) solutions of HJB equations are briefly discussed.
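
To illustrate the stagewise solution procedure described above, the following minimal Python sketch tabulates the generic Bellman recurrence f_n(x_n) = max over u_n of [ g_n(x_n, u_n) + f_{n-1}(x_{n-1}(x_n, u_n)) ] forward through the stages n = 1, 2, …, N on a discretized state grid, recovering the optimal control û_n as a function of x_n at each stage. The stage profit g_n, the inverse state transformation, the grids, and the stage count are hypothetical placeholders, not the paper's model.

```python
import numpy as np

# Sketch of the forward Bellman recurrence for a discrete process
# (hypothetical example): at each stage n the previously tabulated
# optimal profit f_{n-1} is used to compute f_n and the optimal
# control u_hat_n as a function of the state coordinate x_n.

N = 5                                   # number of stages (assumed)
x_grid = np.linspace(0.0, 1.0, 101)     # discretized state coordinate x_n
u_grid = np.linspace(-0.2, 0.2, 41)     # admissible controls u_n (assumed set)

def stage_profit(x, u, n):
    """Hypothetical per-stage profit g_n(x_n, u_n)."""
    return -(u ** 2) - 0.1 * (x - 0.5) ** 2

def predecessor_state(x, u, n):
    """Hypothetical inverse state transformation x_{n-1}(x_n, u_n)."""
    return x - u

f_prev = np.zeros_like(x_grid)          # f_0(x_0): profit before the first stage
policy = []                             # stores u_hat_n(x_n) for each stage

for n in range(1, N + 1):
    f_curr = np.empty_like(x_grid)
    u_hat = np.empty_like(x_grid)
    for i, x in enumerate(x_grid):
        # Candidate profits for all admissible controls; f_{n-1} is
        # interpolated at the predecessor state (clamped at grid ends).
        x_prev = predecessor_state(x, u_grid, n)
        candidates = stage_profit(x, u_grid, n) + np.interp(x_prev, x_grid, f_prev)
        best = np.argmax(candidates)
        f_curr[i] = candidates[best]
        u_hat[i] = u_grid[best]
    policy.append(u_hat)                # optimal control as a function of x_n
    f_prev = f_curr                     # f_n becomes f_{n-1} for the next stage

print("f_N on the first few grid points:", f_prev[:5])
```

The tabulated pair (f_n, û_n(x_n)) produced at each pass is exactly the data that the next stage consumes, which is why the procedure needs only one sweep over n = 1, …, N; the accuracy of the sketch is limited by the chosen state and control grids.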
