Abstract
Solving an optimal control problem using a digital computer implies discrete approximations. Since the 1960s, there have been well-documented [1–3] naive applications of Pontryagin's principle in the discrete domain. Although its incorrect applications continue to this day, the origin of the naivete is quite understandable, because one has a reasonable expectation of the validity of Pontryagin's principle within a discrete domain. That an application of the Hamiltonian minimization condition is not necessarily valid in a discrete domain [1,4] opens up a vast array of questions in theory and computation [2,5]. These questions continue to dominate discussions of the meaning and validity of discrete approximations and computational solutions to optimal control problems [6–10].

Among these questions is the convergence of discrete approximations in optimal control. About the year 2000, there were a number of key discoveries on the convergence of discrete approximations [9,11–14]. Among other things, Hager [9] showed, by way of a counterexample, that a convergent Runge–Kutta (RK) method may not converge. This seemingly contradictory result is actually quite simple to explain [10]: long-established convergence results on ordinary differential equations do not necessarily apply to optimal control problems. Thus, an RK method that is convergent for an ordinary differential equation may not converge when applied to an optimal control problem. Not only does this explain the possibility of erroneous results obtained through computation, it also explains why computational optimal control has heretofore been such a difficult problem. The good news is that if a proper RK method is used (those developed by Hager [9]), convergence can be assured under a proper set of conditions.

Whereas RK methods have a long history of development for ordinary differential equations, pseudospectral (PS) methods have had a relatively short history of development for optimal control. In parallel to Hager's [9] discovery on RK methods, recent developments [8,15–17] show that the convergence theory for PS approximations in optimal control is sharply different from that used in solving partial differential equations. Furthermore, the convergence theory for PS approximations is also different from the one used in RK approximations to optimal control.

A critical examination of convergence of approximations using the new theories developed in recent years has not only begun to reveal the proper computational techniques for solving optimal control problems, it has also exposed the fallacy of long-held tacit assumptions. For instance, Ross [18] showed, by way of a simple counterexample, that an indirect method generates the wrong answer, whereas a direct method generates the correct solution. This counterexample exposed the fallacy of the long-held belief that indirect methods are more accurate than direct methods.

In this Note, we show, by way of another counterexample, that the convergence of the costates does not imply convergence of the control. This result appears to have more impact on the convergence of PS approximations in optimal control than on the convergence of RK approximations because of the significant differences between the two theories; consequently, we restrict our attention to the impact of this result on PS methods, noting, nonetheless, the generality of this assertion.
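As a concrete illustration of what "convergence of a discrete approximation" means in this setting, the following sketch transcribes the linear-quadratic test problem used by Hager [9], minimize (1/2)∫₀¹ (u² + 2x²) dt subject to ẋ = x/2 + u, x(0) = 1, whose optimal control is known in closed form from the associated Riccati equation. The forward-Euler transcription and the SciPy-based solver are illustrative choices made here for brevity; they are neither the Note's PS method nor one of Hager's RK schemes.

```python
# Minimal discretize-then-optimize sketch for Hager's [9] LQ test problem:
#   minimize (1/2) * integral_0^1 (u^2 + 2 x^2) dt,  xdot = x/2 + u,  x(0) = 1.
# The forward-Euler transcription below is an illustrative choice, not one of
# Hager's RK schemes; it only shows how a discrete optimal control is checked
# for convergence against the continuous-time optimum.
import numpy as np
from scipy.optimize import minimize

def exact_u(t):
    # Closed-form optimal control, obtained from the Riccati equation:
    #   u*(t) = 2 (e^{3t/2} - e^3 e^{-3t/2}) / (2 + e^3)
    return 2.0 * (np.exp(1.5 * t) - np.exp(3.0) * np.exp(-1.5 * t)) / (2.0 + np.exp(3.0))

def discrete_cost(u, N):
    # Forward-Euler rollout of the dynamics with a rectangle-rule cost.
    h = 1.0 / N
    x = 1.0
    J = 0.0
    for k in range(N):
        J += 0.5 * h * (u[k] ** 2 + 2.0 * x ** 2)
        x = x + h * (0.5 * x + u[k])
    return J

for N in (10, 20, 40, 80):
    t = np.arange(N) / N  # left endpoints of the Euler mesh
    res = minimize(discrete_cost, np.zeros(N), args=(N,), method="BFGS")
    err = np.max(np.abs(res.x - exact_u(t)))
    print(f"N = {N:3d}   max |u_k - u*(t_k)| = {err:.4f}")
```

With this first-order transcription, the control error should fall roughly in half each time N is doubled. Hager's counterexample [9] consists of an RK scheme that is second order for ordinary differential equations but whose discrete optimal controls exhibit no such decay, which is precisely the distinction between ODE convergence and optimal control convergence drawn above.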