Abstract

We present differentiable predictive control (DPC) as a deep learning-based alternative to explicit model predictive control (MPC) for unknown nonlinear systems. In the DPC framework, a neural state-space model is learned from time-series measurements of the system dynamics. A neural control policy is then optimized via stochastic gradient descent by differentiating the MPC loss function through the closed-loop system dynamics model. The proposed DPC method learns model-based control policies subject to state and input constraints, while supporting time-varying references and constraints. In an embedded implementation on a Raspberry Pi platform, we experimentally demonstrate that constrained control policies can be trained purely from measurements of the unknown nonlinear system. We compare the control performance of the DPC method against explicit MPC and report efficiency gains in online computational demands, memory requirements, policy complexity, and construction time. In particular, we show that our method scales linearly, compared to the exponential scaling of explicit MPC solved via multiparametric programming.
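The training loop described above can be sketched in miniature. The following is a hypothetical, illustrative example only, not the paper's implementation: a linear state-feedback policy is trained by gradient descent on an MPC-style loss (quadratic state and input costs plus a soft input-constraint penalty) rolled out through a known double-integrator model. The model matrices `A`, `B`, the policy parameterization `K`, the horizon, and all hyperparameters are assumptions made for illustration; the paper instead uses a learned neural state-space model and a neural policy with automatic differentiation.

```python
import numpy as np

np.random.seed(0)
# Assumed toy dynamics for illustration: a discrete-time double integrator.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
horizon, lr, umax = 10, 0.02, 1.0

def rollout_loss(K, x0):
    """MPC-style loss: state cost + input cost + soft input-constraint penalty,
    accumulated by rolling the policy u = -K x through the dynamics model."""
    x, loss = x0.copy(), 0.0
    for _ in range(horizon):
        u = -K @ x
        loss += float(x @ x) + 0.1 * float(u @ u)
        loss += 10.0 * float(np.maximum(np.abs(u) - umax, 0.0).sum() ** 2)
        x = A @ x + B @ u
    return loss

def grad_fd(K, x0, eps=1e-5):
    """Finite-difference gradient of the rollout loss w.r.t. the policy gains
    (a stand-in for the automatic differentiation used in DPC)."""
    g = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            Kp, Km = K.copy(), K.copy()
            Kp[i, j] += eps
            Km[i, j] -= eps
            g[i, j] = (rollout_loss(Kp, x0) - rollout_loss(Km, x0)) / (2 * eps)
    return g

# Stochastic gradient descent: each step samples a random initial state.
K = np.zeros((1, 2))
for _ in range(300):
    x0 = np.random.uniform(-1.0, 1.0, size=2)
    K -= lr * grad_fd(K, x0)

# The trained policy should achieve a lower rollout loss than the
# untrained (zero-gain) policy from a fixed test state.
x_test = np.array([1.0, 0.0])
print(rollout_loss(K, x_test) < rollout_loss(np.zeros((1, 2)), x_test))
```

The finite-difference gradient here stands in for differentiating the loss through the closed-loop model; in practice, as the abstract notes, the policy and dynamics model are neural networks and the gradient is obtained by backpropagation.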
