Abstract

Model predictive control (MPC) has established itself as the standard advanced process control method. However, solving the resulting optimization problems in real time can be challenging, especially when uncertainty is explicitly considered in a robust nonlinear predictive control approach. An increasingly popular alternative that avoids solving these optimization problems online is to approximate their solution with neural networks. The networks are trained on many solutions of the MPC problem for different system states, and the approach is therefore often called imitation learning. Controllers obtained via imitation learning have two important drawbacks: it is difficult to learn behaviors that are not well represented in the training data, and the controllers must be redesigned from scratch when the control task changes. In this work, we show that these two drawbacks can be mitigated by combining imitation learning with concepts from reinforcement learning. The central idea is to use imitation learning as a good initialization of a control policy that is then iteratively updated using reinforcement learning, taking advantage of the fact that an explicit and differentiable expression of the approximate MPC controller is available. The efficacy of combining the two learning paradigms is highlighted via simulations of a semi-batch industrial polymerization reactor.
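The following is a minimal sketch of the two-stage idea summarized above, not the authors' implementation: a small, explicit neural-network policy is first fit to precomputed MPC state/input pairs (imitation learning) and then refined with a REINFORCE-style policy-gradient step that exploits the fact that the policy is differentiable. The network architecture, the surrogate "MPC" data, the toy dynamics, the reward function, and all hyperparameters are illustrative placeholders and do not correspond to the robust MPC formulation or the polymerization benchmark of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny explicit policy network: u = W2 @ tanh(W1 @ x + b1) + b2
W1, b1 = 0.1 * rng.standard_normal((16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((1, 16)), np.zeros(1)

def policy(x):
    """Evaluate the network; return the control input u and the hidden activations."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def grads(x, h, err):
    """Gradients of a scalar loss with d(loss)/du = err w.r.t. all network parameters."""
    gW2, gb2 = np.outer(err, h), err
    gh = (W2.T @ err).ravel() * (1.0 - h ** 2)
    return np.outer(gh, x), gh, gW2, gb2

def sgd_step(g, lr):
    """Plain gradient-descent update of the global network parameters."""
    global W1, b1, W2, b2
    gW1, gb1, gW2, gb2 = g
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Stage 1: imitation learning on precomputed "MPC" state/input pairs.
X = rng.uniform(-1.0, 1.0, size=(500, 2))          # sampled system states
U = -(X @ np.array([1.2, 0.8]))[:, None]            # stand-in for stored MPC inputs
for _ in range(30):
    for x, u_star in zip(X, U):
        u, h = policy(x)
        sgd_step(grads(x, h, u - u_star), lr=5e-3)  # squared-error regression

# Stage 2: REINFORCE-style refinement of the pretrained, differentiable policy.
def reward(x, u):
    """Illustrative one-step reward on toy dynamics (placeholder for the closed loop)."""
    x_next = np.array([x[1], -0.5 * x[0] + u[0]])
    return -(x_next @ x_next + 0.1 * u[0] ** 2)

sigma, lr_rl, baseline = 0.1, 1e-3, 0.0
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, size=2)
    u, h = policy(x)
    eps = sigma * rng.standard_normal(1)            # Gaussian exploration noise
    r = reward(x, u + eps)
    adv = r - baseline
    baseline += 0.05 * (r - baseline)               # running-average baseline
    # Score function of the Gaussian exploration policy is (eps / sigma**2) * d(policy)/d(theta),
    # so descending on err = -adv * eps / sigma**2 ascends the expected reward.
    sgd_step(grads(x, h, -adv * eps / sigma ** 2), lr=lr_rl)
```

The key point the sketch illustrates is that the same explicit, differentiable policy is reused across both stages: imitation learning supplies a good initialization, and the reinforcement-learning update only needs gradients of the network itself, not a new online optimization.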
