Abstract

The Receding Horizon Control (RHC) strategy consists of replacing an infinite-horizon stabilization problem with a sequence of numerically more tractable finite-horizon optimal control problems. The dynamic programming principle ensures that if the finite-horizon problems are formulated with the exact value function as a terminal penalty, then the RHC method generates an optimal control. This article deals with the case where the terminal cost function is chosen as a cut-off Taylor approximation of the value function. The main result is an error estimate for the control generated by such a method, when compared with the optimal control. The obtained estimate is of the same order as the employed Taylor approximation and decreases at an exponential rate with respect to the prediction horizon. To illustrate the methodology, the article focuses on a class of bilinear optimal control problems in infinite-dimensional Hilbert spaces.
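The RHC loop described above can be illustrated on a toy problem. The paper treats bilinear problems in infinite-dimensional Hilbert spaces; the sketch below instead uses a simple finite-dimensional linear-quadratic system, where the finite-horizon subproblem with a terminal penalty can be solved exactly by a backward Riccati recursion. All matrices and parameter values here are hypothetical illustration data, not taken from the article; the terminal weight `P_T` plays the role of the (approximate) value function used as terminal cost.

```python
import numpy as np

def rhc_step(A, B, Q, R, P_T, N, x0):
    """Solve a finite-horizon LQ problem of length N with terminal
    penalty x' P_T x via backward Riccati recursion, and return the
    first optimal control -- the receding-horizon feedback action."""
    P = P_T
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return -K @ x0  # only the first control of the horizon is applied

# Hypothetical unstable 2-D system (open-loop eigenvalues > 1)
A = np.array([[1.10, 0.10],
              [0.00, 1.05]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P_T = 10.0 * np.eye(2)  # crude stand-in for the value function

x = np.array([1.0, -1.0])
for t in range(50):
    u = rhc_step(A, B, Q, R, P_T, N=10, x0=x)
    x = A @ x + B @ u  # advance the plant, then re-optimize

print(np.linalg.norm(x))  # state driven toward the origin
```

The key point mirrored from the abstract: only the first control of each finite-horizon solution is applied before the horizon recedes, and the quality of the resulting closed loop depends on how well the terminal penalty approximates the true value function and on the horizon length `N`.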
