Abstract

We consider a finite-horizon continuous-time optimal control problem with nonlinear dynamics, an integral cost, control constraints, and a time-varying parameter representing perturbations or uncertainty. After discretizing the problem we employ a Model Predictive Control (MPC) approach: we first solve the problem over the entire remaining time horizon and then apply the first element of the optimal discrete-time control sequence, as a constant-in-time function, to the continuous-time system over the sampling interval. The state at the end of the sampling interval is then measured (estimated) with a certain error, and the process is repeated at each step over the remaining horizon. As a result, we obtain a piecewise constant function of time representing the MPC-generated control signal. Hence MPC serves as an approximation to the optimal feedback control for the continuous-time system. In our main result we derive an estimate of the difference between the MPC-generated state and control trajectories and the optimal feedback generated state and control trajectories, both obtained for the same value of the perturbation parameter, in terms of the step size of the discretization and the measurement error. Numerical results illustrating our estimate are reported.
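The abstract describes a receding-horizon loop: solve the discretized problem over the remaining horizon, apply the first control as a zero-order hold, re-measure the state with error, and repeat. The following Python sketch illustrates that loop under stated assumptions; the scalar dynamics f, the running cost, the explicit Euler discretization, the box control bounds, and the Gaussian measurement-noise model are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch of the receding-horizon (MPC) loop described above.
# All model choices below are hypothetical examples, not the paper's.
import numpy as np
from scipy.optimize import minimize

T, h = 1.0, 0.05                 # horizon length and sampling step
N = int(round(T / h))            # number of sampling intervals
u_lo, u_hi = -1.0, 1.0           # control constraints (box bounds)
meas_err = 1e-3                  # measurement (estimation) error level
rng = np.random.default_rng(0)

def f(x, u):                     # nonlinear dynamics (placeholder example)
    return x**2 + u

def running_cost(x, u):          # integrand of the integral cost
    return x**2 + 0.1 * u**2

def discrete_cost(u_seq, x0):
    """Cost of the Euler-discretized problem over the remaining horizon."""
    x, J = x0, 0.0
    for u in u_seq:
        J += h * running_cost(x, u)
        x = x + h * f(x, u)      # explicit Euler step
    return J

def mpc_trajectory(x0):
    """Closed loop: solve over the remaining horizon, apply the first
    control as a constant over the sampling interval, re-measure, repeat."""
    xs, us = [x0], []
    x_meas = x0
    for k in range(N):
        m = N - k                                      # remaining steps
        res = minimize(discrete_cost, np.zeros(m), args=(x_meas,),
                       bounds=[(u_lo, u_hi)] * m, method="L-BFGS-B")
        u_k = float(res.x[0])                          # first element only
        us.append(u_k)
        # Propagate the continuous-time system over [t_k, t_k + h]
        # (sub-stepped Euler as a stand-in for the exact flow).
        x = xs[-1]
        for _ in range(10):
            x = x + (h / 10) * f(x, u_k)
        xs.append(x)
        x_meas = x + meas_err * rng.standard_normal()  # noisy state estimate
    return np.array(xs), np.array(us)

xs, us = mpc_trajectory(x0=0.5)
print("final state:", xs[-1])
```

The result is the piecewise constant MPC control signal `us` and the corresponding closed-loop state trajectory `xs`, whose deviation from the optimal feedback trajectories is what the paper's estimate bounds in terms of h and the measurement error.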
