Abstract
Model predictive control (MPC) is increasingly being considered for the control of fast systems and embedded applications. However, MPC poses significant challenges for such systems, chief among them its high computational complexity. Further, the MPC parameters must be tuned, a largely trial-and-error process that strongly affects the control performance, robustness, and computational complexity of the controller. This paper presents a multivariate optimization method based on reinforcement learning (RL) that automatically tunes the control algorithm's parameters from data to achieve optimal closed-loop performance. The main contribution of our method is the inclusion of state-dependent optimization of the meta-parameters of MPC, i.e. parameters with respect to which the MPC solution is not differentiable. Our control algorithm is based on an event-triggered MPC, in which we learn when the MPC should be re-computed, and a dual-mode scheme, in which a linear state feedback control law is applied between MPC computations. We formulate a novel mixture-distribution RL policy that determines the meta-parameters of our control algorithm and show that joint optimization achieves improvements that univariate optimization of the same parameters does not. We demonstrate our framework on the inverted pendulum control task, reducing the total computation time of the control system by 36% while also improving the control performance by 18.4%.
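The event-triggered, dual-mode structure described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's method: the pendulum is replaced by a discretized double integrator, the MPC solve is stood in for by an unconstrained finite-horizon LQR computed via a backward Riccati recursion, and the learned trigger policy is replaced by a fixed state-deviation threshold (all of these are assumptions for the sketch).

```python
import numpy as np

# Discretized double-integrator stand-in for the pendulum dynamics (illustrative).
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])   # state cost
R = np.array([[0.1]])      # input cost

def riccati_gains(A, B, Q, R, N=30):
    """Finite-horizon LQR via backward Riccati recursion -- a stand-in for
    an (unconstrained) MPC solve. Returns the first-step feedback gain."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def simulate(trigger_threshold, steps=200):
    """Event-triggered dual-mode loop: re-solve the 'MPC' only when the state
    has drifted past the threshold since the last solve; in between solves,
    reuse the cached linear state feedback gain."""
    x = np.array([[1.0], [0.0]])
    K = riccati_gains(A, B, Q, R)
    x_at_solve = x.copy()
    solves, cost = 1, 0.0
    for _ in range(steps):
        if np.linalg.norm(x - x_at_solve) > trigger_threshold:
            K = riccati_gains(A, B, Q, R)   # event: recompute the MPC solution
            x_at_solve = x.copy()
            solves += 1
        u = -K @ x                           # dual mode: linear feedback between solves
        cost += float(x.T @ Q @ x + u.T @ R @ u)
        x = A @ x + B @ u
    return solves, cost

solves, cost = simulate(trigger_threshold=0.5)
print(f"MPC solves: {solves} of 200 steps, closed-loop cost: {cost:.2f}")
```

In the paper's setting the trigger threshold (and the other meta-parameters) would instead be chosen by the state-dependent RL policy, trading the number of solves against closed-loop cost; for a linear time-invariant model the recomputed gain is unchanged, so the sketch only illustrates the trigger-and-fallback structure, not the benefit of re-solving.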
Published in: Engineering Applications of Artificial Intelligence