Abstract

The last twenty years have seen a great flourishing in optimal control theory. In this paper, we shall highlight some of the salient theoretical developments in the specific area of algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived using first- and second-variation methods of the calculus of variations. These methods have since been recognized as gradient, Newton–Raphson, or Gauss–Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variants of the conjugate gradient method. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms designed specifically for constrained problems appeared. Among these we find methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional gradient method, the gradient projection method, and a few methods of feasible directions were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called $\varepsilon$-methods combine the Ritz method with penalty function techniques. Some of the recent work has dealt with discretization effects. This has led to the concept of well-posedness of a problem and to adaptive integration techniques.
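To make the first of these developments concrete, the following is a minimal sketch of a first-variation (function-space gradient) method. The test problem, step size, and forward-Euler discretization are illustrative assumptions, not taken from the paper: minimize $J = \int_0^1 (x^2 + u^2)\,dt$ subject to $\dot{x} = u$, $x(0) = 1$. The gradient of $J$ with respect to the control is obtained by integrating the state equation forward and the adjoint (costate) equation backward.

```python
import numpy as np

# Hypothetical scalar test problem (an illustrative assumption, not from the paper):
#   minimize J = integral of (x^2 + u^2) dt  subject to  x' = u,  x(0) = 1,  t in [0, 1].
# Hamiltonian: H = x^2 + u^2 + lam * u, so the adjoint equation is
#   lam' = -dH/dx = -2x,  lam(1) = 0,  and the gradient density is dH/du = 2u + lam.

T, N = 1.0, 100
dt = T / N
u = np.zeros(N)        # initial control guess u(t) = 0
alpha = 0.1            # illustrative step size

for it in range(200):
    # Forward pass: integrate the state equation x' = u (forward Euler).
    x = np.empty(N + 1)
    x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]

    # Backward pass: integrate the adjoint equation lam' = -2x, lam(T) = 0.
    lam = np.zeros(N + 1)
    for k in range(N - 1, -1, -1):
        lam[k] = lam[k + 1] + dt * 2.0 * x[k + 1]

    if it % 50 == 0:
        J = dt * np.sum(x[:N] ** 2 + u ** 2)
        print(f"iter {it:3d}: J = {J:.5f}")

    # Steepest-descent update in function space: u <- u - alpha * (dH/du).
    u -= alpha * (2.0 * u + lam[:N])
```

For this convex problem the iterates approach the known optimal cost $\tanh(1) \approx 0.762$ (up to discretization error); the second-variation (Newton–Raphson) methods mentioned above replace the fixed step with curvature information from the second derivative of the Hamiltonian.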
