Abstract

The Receding Horizon Control (RHC) strategy consists of replacing an infinite-horizon stabilization problem with a sequence of finite-horizon optimal control problems, which are numerically more tractable. The dynamic programming principle ensures that if the finite-horizon problems are formulated with the exact value function as a terminal penalty, then the RHC method generates an optimal control. This article deals with the case where the terminal cost function is chosen as a cut-off Taylor approximation of the value function. The main result is an error rate estimate for the control generated by such a method, compared with the optimal control. The obtained estimate is of the same order as the employed Taylor approximation and decreases at an exponential rate with respect to the prediction horizon. To illustrate the methodology, the article focuses on a class of bilinear optimal control problems in infinite-dimensional Hilbert spaces.
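The mechanism described above can be illustrated on a much simpler setting than the paper's bilinear Hilbert-space problems: a scalar discrete-time linear-quadratic problem, where each finite-horizon subproblem is solved exactly by a backward Riccati recursion and the terminal penalty `p_terminal` plays the role of the (approximate) value function. This is only a hedged sketch; all names, the plant parameters, and the choice of a crude terminal penalty are illustrative assumptions, not the article's actual construction.

```python
# Illustrative RHC sketch for a scalar discrete-time LQ problem:
#   x_{k+1} = a x_k + b u_k,  stage cost q x^2 + r u^2.
# The terminal penalty p_terminal stands in for an approximation of
# the true infinite-horizon value function. All parameters are
# hypothetical, chosen only to demonstrate the receding-horizon loop.

def riccati_gains(a, b, q, r, p_terminal, horizon):
    """Backward Riccati recursion over a finite horizon.

    Returns the feedback gains ordered forward in time; the first
    gain defines the control actually applied by the RHC scheme.
    """
    p = p_terminal
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)          # stage-optimal gain
        p = q + p * (a - b * k) ** 2 + r * k * k   # value-function update
        gains.append(k)
    return list(reversed(gains))

def rhc_trajectory(x0, a, b, q, r, p_terminal, horizon, n_steps):
    """Receding horizon loop: solve the finite-horizon problem,
    apply only its first control, shift the horizon, repeat."""
    x, traj = x0, [x0]
    for _ in range(n_steps):
        k0 = riccati_gains(a, b, q, r, p_terminal, horizon)[0]
        u = -k0 * x            # first control of the horizon solution
        x = a * x + b * u
        traj.append(x)
    return traj

# Unstable plant (a > 1) with a crude terminal penalty p_terminal = q:
# even this rough value-function approximation yields a stabilizing loop.
traj = rhc_trajectory(x0=1.0, a=1.2, b=1.0, q=1.0, r=1.0,
                      p_terminal=1.0, horizon=10, n_steps=30)
print(abs(traj[-1]) < 1e-3)
```

In this toy example the Riccati recursion converges quickly, so even a poor terminal penalty combined with a moderate prediction horizon produces a near-optimal feedback, mirroring the article's theme that the suboptimality decays exponentially in the horizon length.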
