Abstract

In an earlier analysis of strong variation algorithms for optimal control problems with endpoint inequality constraints, Mayne and Polak provided conditions under which accumulation points satisfy a condition requiring a certain optimality function, used in the algorithms to generate search directions, to be nonnegative for all controls. The aim of this paper is to clarify the nature of this optimality condition, which we call the first-order minimax condition, and of a related integrated form of the condition, which is also implicit in past convergence analyses of such algorithms. We consider these conditions separately for problem formulations with and without a pathwise state constraint. When there are no pathwise state constraints, we show that the integrated first-order minimax condition is equivalent to the minimum principle and that the minimum principle (and, equivalently, the integrated first-order minimax condition) is strictly stronger than the first-order minimax condition. For problems with state constraints, we establish that the integrated first-order minimax condition and the minimum principle are once again equivalent. In the state-constrained context, however, neither the minimum principle nor the first-order minimax condition is stronger than the other. An example confirms the perhaps surprising fact that, for problems with state constraints, the first-order minimax condition is a distinct optimality condition that can provide information in circumstances where the minimum principle fails to do so.
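To fix ideas, the following is a minimal schematic sketch, for the case without pathwise state constraints, of the two forms of the minimum principle discussed above. The notation (Hamiltonian $H$, reference state $\bar{x}$, costate $p$, reference control $\bar{u}$, control set $\Omega$, horizon $[0,T]$) is assumed here for illustration only; the abstract does not fix the paper's optimality function or its precise minimax structure. The pointwise minimum principle requires
\[
H(\bar{x}(t), p(t), \bar{u}(t)) \;\le\; H(\bar{x}(t), p(t), v)
\quad \text{for all } v \in \Omega, \ \text{a.e. } t \in [0,T],
\]
while an integrated condition of the kind referred to takes the form
\[
\int_0^T \big[\, H(\bar{x}(t), p(t), u(t)) - H(\bar{x}(t), p(t), \bar{u}(t)) \,\big]\, dt \;\ge\; 0
\quad \text{for all admissible controls } u(\cdot).
\]
In the absence of state constraints, an integrated inequality of this type is equivalent to the pointwise form, consistent with the equivalence stated in the abstract; the paper's precise statements may differ in detail.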
