Abstract

Model Predictive Control (MPC) has proven to be a versatile and flexible method for controlling complex nonlinear systems with guaranteed constraint satisfaction. However, its strong dependence on model quality often renders it unsuitable for hard-to-model systems. Machine learning methods, on the other hand, show great performance when approximating functions from data. This ability to learn with little a priori knowledge, however, comes at the cost of low predictability and a lack of safety guarantees. To overcome these drawbacks, we illustrate how a neural network can be set up as a nonlinear feedforward controller that augments the MPC control signal to approximate a desired control behaviour. For instance, it could aim to mimic the control behaviour of a human driver, while the underlying MPC exploits prior knowledge. Moreover, to preserve constraint satisfaction, we suggest restricting the range of the neural network outputs so that they intrinsically satisfy the control input constraints. Subsequently, we represent the neural network control signal as a disturbance, which enables the application of tube MPC to retain state constraint satisfaction at the cost of introducing some conservatism. We demonstrate these concepts in simulation and highlight both the advantages and the drawbacks of the proposed control structure.
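One common way to make a learned feedforward term intrinsically satisfy box input constraints, as the abstract suggests, is to pass the network's final pre-activation through a bounded, rescaled activation such as tanh. The sketch below illustrates this idea; the function name, the use of tanh, and the box-constraint form are assumptions for illustration, not details confirmed by the paper.

```python
import numpy as np

def bounded_nn_output(z, u_min, u_max):
    """Map an unbounded network pre-activation z into the box [u_min, u_max].

    Hypothetical sketch: tanh squashes z into (-1, 1), which is then
    affinely rescaled so the learned feedforward signal can never
    violate the control input constraints, regardless of training.
    """
    return u_min + 0.5 * (np.tanh(z) + 1.0) * (u_max - u_min)

# The total input applied to the plant would then be u = u_mpc + u_nn,
# with u_nn bounded by construction and treated as a bounded disturbance
# by the underlying tube MPC.
```

Because the bound holds for any value of `z`, the tube MPC can take the worst case of this bounded term into account when tightening the state constraints, which is the source of the conservatism mentioned above.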
