Abstract

Data-Driven Predictive Control (DDPC) has recently been proposed as an effective alternative to traditional Model Predictive Control (MPC), in that it addresses the same constrained optimization problem without the need to explicitly identify a full model of the plant. However, DDPC is built directly upon measured input/output trajectories; finite-sample effects in stochastic data, due to, e.g., measurement noise, may therefore have a detrimental impact on closed-loop performance. Exploiting a formal statistical analysis of the prediction error, in this paper we propose the first systematic approach to deal with uncertainty due to finite-sample effects. To this end, we introduce two regularization strategies for which, differently from existing regularization-based DDPC techniques, we propose a tuning rationale that allows us to select the regularization hyper-parameters before closing the loop and without additional experiments. Simulation results confirm the potential of the proposed strategy when closing the loop.
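To make the setting concrete, the following is a minimal sketch of a DeePC-style data-driven predictor of the kind the abstract refers to: Hankel matrices built from recorded input/output data replace an explicit plant model, and a ridge-type penalty on the combination vector `g` illustrates the role of a regularization hyper-parameter. All function names, the first-order test system, and the specific least-squares formulation are illustrative assumptions, not the paper's actual algorithm or tuning rule.

```python
import numpy as np

def hankel(w, L):
    """Stack all length-L windows of signal w as columns of a Hankel matrix."""
    T = len(w)
    return np.column_stack([w[i:i + L] for i in range(T - L + 1)])

def ddpc_predict(u_data, y_data, T_ini, N, u_ini, y_ini, u_future, lam=1e-6):
    """Regularized DDPC-style output prediction (illustrative sketch).

    Finds g minimizing ||[Up; Yp; Uf] g - [u_ini; y_ini; u_future]||^2
    + lam * ||g||^2, then predicts the future outputs as Yf @ g.
    lam plays the role of a regularization hyper-parameter.
    """
    L = T_ini + N
    Hu = hankel(u_data, L)
    Hy = hankel(y_data, L)
    Up, Uf = Hu[:T_ini], Hu[T_ini:]   # past / future input blocks
    Yp, Yf = Hy[:T_ini], Hy[T_ini:]   # past / future output blocks
    A = np.vstack([Up, Yp, Uf])
    b = np.concatenate([u_ini, y_ini, u_future])
    # Ridge-regularized least squares for the trajectory combination vector g
    g = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return Yf @ g
```

With noise-free data from a linear system and a small `lam`, the predictor reproduces the true future outputs; under measurement noise, `lam` trades off fitting the data against the variance induced by finite-sample effects, which is the tuning problem the paper addresses.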
