Abstract

The stability of stochastic Model Predictive Control (MPC) subject to additive disturbances is often demonstrated in the literature by constructing Lyapunov-like inequalities that guarantee closed-loop performance bounds and boundedness of the state; convergence to a terminal control law, however, is typically not shown. In this work we use results on general state-space Markov chains to derive conditions guaranteeing that disturbed nonlinear systems converge to terminal modes of operation: the closed-loop control law converges in probability to an a priori known terminal linear feedback law, and the time-average performance equals that of the terminal controller. We discuss implications for the convergence of control laws in stochastic MPC and, in particular, prove convergence for two stochastic MPC formulations.
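The phenomenon described above can be illustrated with a minimal sketch (not the paper's formulation): a scalar linear system with bounded additive disturbances, controlled by an input-saturated version of a terminal linear feedback law. The gain, bounds, and dynamics below are illustrative assumptions; once the state enters the region where the input constraint is inactive, the applied input coincides with the terminal law at every subsequent step, i.e. the controller has converged to its terminal mode of operation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar disturbed system x+ = a*x + b*u + w, |w| <= 0.1.
a, b = 1.2, 1.0
K = -0.9       # stabilizing terminal gain: |a + b*K| = 0.3 < 1
u_max = 1.0    # input constraint that makes the controller nonlinear

x = 3.0        # start where the constraint is active
terminal_mode = []
for t in range(200):
    u = np.clip(K * x, -u_max, u_max)        # saturated terminal law
    terminal_mode.append(abs(u - K * x) < 1e-12)
    w = rng.uniform(-0.1, 0.1)               # bounded additive disturbance
    x = a * x + b * u + w

# The set {|x| <= 1/0.9} is robustly invariant and the saturated phase
# drives the state into it, so after a transient the constraint stays
# inactive and u == K*x forever.
print(all(terminal_mode[-100:]))  # True: terminal linear feedback applied
```

The same qualitative behaviour, with u replaced by the MPC optimiser's first move, is what the convergence-in-probability results formalise for the disturbed nonlinear case.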
