Abstract
Representing the uncertainties with a set of scenarios, the optimization problem resulting from a robust nonlinear model predictive control (NMPC) strategy at each sampling instance can be viewed as a large-scale stochastic program. This paper solves these optimization problems using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines. The control strategy is illustrated with a case study of a multidimensional unseeded batch crystallization process. For this application, a robust NMPC based on min–max optimization guarantees satisfaction of all state and input constraints for a set of uncertainty realizations, and also provides better robust performance compared with open-loop optimal control, nominal NMPC, and robust NMPC minimizing the expected performance at each sampling instance. The performance of robust NMPC can be improved by generating optimization scenarios using Bayesian inference. With the efficient parallel solver, the solution time of one optimization problem is reduced from 6.7 min to 0.5 min, allowing for real-time application.
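The scenario-based min–max formulation described above can be illustrated with a minimal sketch (not the paper's implementation): the worst-case objective over a finite scenario set is reformulated with an epigraph variable and solved with SciPy's SLSQP. The scalar system, the uncertain-gain scenario values, the horizon, and the input bounds are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system x+ = a*x + u with an uncertain gain a;
# each scenario is one realization of a.
scenarios = [0.8, 1.0, 1.2]   # assumed uncertainty realizations
x0, N = 1.0, 5                # initial state and control horizon

def cost(u, a):
    """Quadratic tracking cost of the input sequence u under gain a."""
    x, J = x0, 0.0
    for uk in u:
        J += x**2 + 0.1 * uk**2
        x = a * x + uk
    return J + x**2            # terminal penalty

# Epigraph reformulation of the min-max problem:
#   min_t,u  t   s.t.  cost(u, a_s) <= t  for every scenario a_s.
def objective(z):              # z = [u_0, ..., u_{N-1}, t]
    return z[-1]

cons = [{"type": "ineq", "fun": (lambda z, a=a: z[-1] - cost(z[:-1], a))}
        for a in scenarios]
bounds = [(-1.0, 1.0)] * N + [(None, None)]   # input constraints, free t

z0 = np.zeros(N + 1)
z0[-1] = cost(z0[:-1], scenarios[0])
res = minimize(objective, z0, constraints=cons, bounds=bounds, method="SLSQP")
u_opt, worst = res.x[:-1], res.x[-1]          # worst-case-optimal inputs, worst cost
```

In a receding-horizon (NMPC) setting, only `u_opt[0]` would be applied before re-solving at the next sampling instance with updated state estimates.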
Highlights
Nonlinear model predictive control (NMPC) is an advanced control technique based on an online solution of a nonlinear optimal control problem at each sampling instance using new measurements and updated state estimates
We assume that measurements of mean length (ML), aspect ratio (AR), and concentration C are available, and that the measurement noise corresponding to ML, AR, and C follows truncated normal distributions on the intervals [−12, 12] μm, [−0.2, 0.2], and [−0.008, 0.008] g/cm3
This paper solves the optimization problems arising from robust NMPC using the parallel algorithm developed to solve stochastic programs on distributed and shared-memory machines
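The truncated normal measurement noise described in the highlights can be sampled with `scipy.stats.truncnorm`. The standard deviations below are assumptions (each stated interval is treated as ±3σ), since the highlight specifies only the truncation intervals.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_noise(lo, hi, sigma, size, rng):
    """Zero-mean normal noise with std sigma, truncated to [lo, hi]."""
    a, b = lo / sigma, hi / sigma   # truncnorm takes bounds in sigma units
    return truncnorm.rvs(a, b, loc=0.0, scale=sigma, size=size, random_state=rng)

rng = np.random.default_rng(0)
# Assumed sigmas: each interval taken as +/- 3 sigma.
ml_noise = truncated_noise(-12.0, 12.0, 4.0, 1000, rng)        # mean length, um
ar_noise = truncated_noise(-0.2, 0.2, 0.2 / 3, 1000, rng)      # aspect ratio
c_noise = truncated_noise(-0.008, 0.008, 0.008 / 3, 1000, rng) # concentration, g/cm3
```

Such samples could be used both to corrupt simulated measurements and to generate the uncertainty scenarios for the robust optimization.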
Summary
Nonlinear model predictive control (NMPC) is an advanced control technique based on the online solution of a nonlinear optimal control problem at each sampling instance, using new measurements and updated state estimates. The most widely studied approach is to solve a min–max optimization at each sampling instance that minimizes the worst-case performance index while satisfying the state and input constraints for a set of uncertainty realizations [3]. In addition to the parallel solution of the KKT system, a scalable parallel algorithm requires parallel evaluations of the nonlinear programming (NLP) functions and gradients, as well as parallel implementations of all other linear algebra operations (e.g., vector–vector operations and matrix–vector multiplications). While the latter is straightforward on many parallel architectures, the former is not.
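The Schur complement decomposition mentioned above can be sketched on a block-arrowhead linear system, where each scenario block's contribution to the Schur complement of the coupling (first-stage) variables is formed independently; in a parallel implementation, the per-scenario loop below is what gets distributed across processes. The matrices here are random stand-ins, not the KKT system of the crystallization problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_y, S = 4, 3, 3   # per-scenario size, coupling size, number of scenarios

# Symmetric, well-conditioned scenario blocks A_s, coupling blocks B_s,
# and first-stage block C (stand-ins for KKT blocks).
A = [np.eye(n_s) * 5 + 0.1 * rng.standard_normal((n_s, n_s)) for _ in range(S)]
A = [0.5 * (M + M.T) for M in A]
B = [rng.standard_normal((n_s, n_y)) for _ in range(S)]
C = np.eye(n_y) * 5
b = [rng.standard_normal(n_s) for _ in range(S)]
c = rng.standard_normal(n_y)

# Form S = C - sum_s B_s^T A_s^{-1} B_s and the reduced right-hand side.
# Each iteration touches only scenario s's data, so this loop parallelizes.
schur, rhs = C.copy(), c.copy()
for s in range(S):
    schur -= B[s].T @ np.linalg.solve(A[s], B[s])
    rhs -= B[s].T @ np.linalg.solve(A[s], b[s])

y = np.linalg.solve(schur, rhs)   # small, coupled first-stage solve
# Back-substitution: again one independent solve per scenario.
x = [np.linalg.solve(A[s], b[s] - B[s] @ y) for s in range(S)]
```

Only the small `n_y`-by-`n_y` Schur system is solved serially; everything else decomposes by scenario, which is what makes the approach scale with the number of uncertainty realizations.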