Abstract
Stochastic model predictive control (SMPC) approximates the solution of constrained stochastic optimal control problems by repeatedly solving a simplified problem over a reduced prediction horizon. This paper demonstrates and discusses significant open challenges for current SMPC methods with respect to their closed-loop performance and their conservatism regarding constraint satisfaction. In particular, we compare two ways of formulating chance constraints in SMPC. The first is a direct feedback formulation, corresponding to the typical implementation of SMPC: at each time step of the receding-horizon control, chance constraints are imposed on the predicted state distribution conditioned on the current measured state. Indirect feedback, in contrast, formulates constraints via a suitably defined nominal state, which allows chance constraints to be enforced on the closed loop. In numerical examples, we demonstrate that direct feedback, i.e., the typical form of SMPC, can be significantly conservative, permitting almost no constraint violations; the resulting loss of performance, we show, can be alleviated by indirect feedback formulations. In addition, we prove that indirect feedback recovers the unconstrained optimal LQR solution whenever that solution is also feasible for the constrained optimal control problem.
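The contrast between the two formulations can be sketched numerically. The following is a toy illustration only, not the paper's formulation: it assumes a scalar linear system x⁺ = a·x + u + w with Gaussian noise, a hypothetical tube feedback gain k, and a half-space chance constraint handled by back-off (constraint-tightening) terms. Under direct feedback, the prediction error variance restarts at zero at each re-plan (conditioned on the measured state), so the back-off grows along the horizon; under indirect feedback, a constant back-off based on the stationary closed-loop error variance is applied to the nominal state. All parameter values below are arbitrary.

```python
from statistics import NormalDist

# Hypothetical scalar system x+ = a*x + u + w, w ~ N(0, sigma^2),
# tube feedback gain k, chance constraint P(x <= x_max) >= 1 - eps.
a, k, sigma, eps, horizon = 1.0, -0.4, 0.1, 0.05, 10
phi = a + k                          # closed-loop error dynamics
z = NormalDist().inv_cdf(1 - eps)    # Gaussian quantile for the back-off

# Direct feedback: error variance is reset to zero at every re-plan,
# so the back-off grows with the prediction step i.
var = 0.0
direct_backoff = []
for i in range(1, horizon + 1):
    var = phi**2 * var + sigma**2    # variance recursion of the error
    direct_backoff.append(z * var**0.5)

# Indirect feedback: constraints act on a nominal state and are
# tightened by a constant amount based on the stationary error
# variance of the closed loop.
var_inf = sigma**2 / (1 - phi**2)    # fixed point of the recursion
indirect_backoff = z * var_inf**0.5
```

In this sketch the direct-feedback back-off increases monotonically toward the constant indirect-feedback back-off, which mirrors the qualitative point of the abstract: conditioning on the measured state at every step changes which distribution the chance constraint is imposed on, and hence how much tightening is applied in closed loop.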