Abstract

Sufficient conditions for the stability of stochastic model predictive control without terminal cost and terminal constraints are derived. Analogous to stability proofs in the nominal setting, we first provide results for the case of optimization over general feedback laws with exact propagation of the probability density functions of the predicted states. We highlight why these results, being based on the principle of optimality, do not directly extend to currently used computationally tractable approximations such as optimization over parameterized feedback laws and relaxation of the chance constraints. Building on these insights, stability results are derived for both cases under stronger assumptions. A third approach is presented for linear systems, where propagating the mean value and the covariance matrix of the states instead of the complete distribution is sufficient, and hence the principle of optimality can be used again. The main results are presented for nonlinear systems, along with examples and computational simplifications for linear systems.
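To illustrate the linear-system case mentioned above, where propagating the mean and covariance of the predicted states suffices instead of the full distribution, the following is a minimal sketch assuming additive Gaussian noise and an affine feedback parameterization u_k = K x_k + v_k. All matrices, the horizon length, and the function names are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

# Assumed linear system x_{k+1} = A x_k + B u_k + w_k with w_k ~ N(0, W).
# The numerical values below are placeholders for illustration only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
W = 0.01 * np.eye(2)          # covariance of the additive disturbance w_k

K = np.array([[-1.0, -1.5]])  # assumed affine feedback u_k = K x_k + v_k
N = 10                        # assumed prediction horizon


def propagate(mu0, Sigma0, v_seq):
    """Propagate the predicted state mean and covariance over the horizon."""
    mu, Sigma = mu0, Sigma0
    means, covs = [mu], [Sigma]
    Acl = A + B @ K           # closed-loop dynamics under the parameterized feedback
    for k in range(N):
        mu = Acl @ mu + B @ v_seq[k]
        Sigma = Acl @ Sigma @ Acl.T + W
        means.append(mu)
        covs.append(Sigma)
    return means, covs


# Example usage: known initial state, zero feedforward corrections.
mu0 = np.array([1.0, 0.0])
Sigma0 = np.zeros((2, 2))
v_seq = [np.zeros(1) for _ in range(N)]
means, covs = propagate(mu0, Sigma0, v_seq)
```

Because the predicted mean and covariance fully characterize the Gaussian state distribution in this setting, chance constraints can be evaluated from these two moments alone, which is what makes the dynamic-programming argument based on the principle of optimality applicable again.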
