Abstract

Model predictive control (MPC) has become a predominant method in control systems, yet it faces two key challenges. First, MPC typically hinges on the availability of a precise system model, and even minor modeling deviations can drastically degrade control performance. Second, it carries a high computational load, since complex optimization problems must be solved in real time. This study introduces a method that harnesses the probabilistic nature of Gaussian processes (GPs) to provide robust, adaptive, and computationally efficient optimal control. Our methodology begins with the collection of data for learning optimal control policies. We then train GPs offline on these data, enabling them to capture the system dynamics, establish input–output relationships, and, crucially, quantify uncertainties that inform the MPC framework. Using the mean and uncertainty estimates derived from the GPs, we design a controller that adapts to system deviations and maintains consistent performance, even in the face of unforeseen disturbances or model inaccuracies. Convergence of the closed-loop system is guaranteed through the Lyapunov stability theorem. Numerical experiments demonstrate the strong performance of our approach, notably its ability to handle complex dynamic systems even with limited training data, marking a significant advance in MPC strategies.
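As a minimal sketch of the pipeline the abstract describes (not the authors' implementation), the example below trains a GP offline on state-transition data and then uses the GP's mean prediction and uncertainty estimate inside a simple one-step MPC cost. The toy plant `f_true`, the uncertainty weight `lam`, and the grid search over candidate inputs (in place of a full receding-horizon solver) are all illustrative assumptions.

```python
# Sketch: GP-informed MPC. Assumes scikit-learn; the plant, weights,
# and one-step horizon are simplifications for illustration only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def f_true(x, u):
    # Hypothetical unknown plant, used here only to generate training data.
    return 0.9 * x + 0.5 * np.sin(u)

# --- Offline phase: learn dynamics (x, u) -> x_next from sampled data ---
X_train = rng.uniform(-2, 2, size=(40, 2))            # columns: [state, input]
y_train = f_true(X_train[:, 0], X_train[:, 1])
y_train += 0.01 * rng.standard_normal(len(y_train))   # measurement noise

gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=1.0),
    alpha=1e-4,
    normalize_y=True,
).fit(X_train, y_train)

# --- Online phase: pick the input minimizing tracking error plus an
# --- uncertainty penalty, so the controller avoids poorly learned regions ---
def mpc_step(x, x_ref, u_grid=np.linspace(-2, 2, 201), lam=1.0):
    Z = np.column_stack([np.full_like(u_grid, x), u_grid])
    mean, std = gp.predict(Z, return_std=True)        # GP mean and uncertainty
    cost = (mean - x_ref) ** 2 + lam * std ** 2 + 0.01 * u_grid ** 2
    return u_grid[np.argmin(cost)]

x = 1.5
for _ in range(10):
    u = mpc_step(x, x_ref=0.0)
    x = f_true(x, u)                                  # apply to the plant
print(f"final state: {x:.3f}")
```

In this sketch, penalizing the predictive variance (`lam * std ** 2`) is what makes the controller uncertainty-aware: inputs whose outcomes the GP cannot predict confidently are discouraged, which is one simple way the GP's probabilistic output can inform the MPC objective.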
