Abstract
Data-Driven Predictive Control (DDPC) has recently been proposed as an effective alternative to traditional Model Predictive Control (MPC), in that the same constrained optimization problem can be addressed without explicitly identifying a full model of the plant. However, DDPC is built directly upon input/output trajectories. Therefore, finite-sample effects of stochastic data, due to, e.g., measurement noise, may have a detrimental impact on closed-loop performance. Exploiting a formal statistical analysis of the prediction error, in this paper we propose the first systematic approach to dealing with uncertainty due to finite-sample effects. To this end, we introduce two regularization strategies for which, differently from existing regularization-based DDPC techniques, we propose a tuning rationale that allows the regularization hyperparameters to be selected before closing the loop and without additional experiments. Simulation results confirm the potential of the proposed strategy when closing the loop.