Abstract

Nonlinear high‐dimensional distributed parameter systems (DPSs) described by sets of parabolic partial differential equations (PDEs) exhibit dominant, low‐dimensional slow behavior that can be captured through model reduction. In our previous work, we presented a time–space‐coupled model reduction architecture that combines encoder–decoder networks with recurrent neural networks (RNNs) to model the spatiotemporal dynamics of DPSs without recourse to the governing equations. In this work, we further analyze the stability of the training dynamics of this deep architecture using Lyapunov exponents (LEs). We then construct nonlinear model predictive control (MPC) formulations for the DPS based on the learned, reduced‐dimensional model, and implement the MPC with a path‐integral optimal control algorithm to avoid any analytic derivatives of the dynamics. The effectiveness of integrating the deep neural network‐based model with MPC is demonstrated on a tubular reactor with recycle. Simulation results also show that the LE can serve as a readout of training stability for the learned dynamical model.
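To make the two key ingredients concrete, here is a minimal sketch of how the largest Lyapunov exponent of a learned discrete‐time latent model can be estimated with the classical two‐trajectory (Benettin) renormalization method. The map `f` below is a hypothetical stand‐in for the trained RNN latent dynamics; the paper's actual architecture and estimation procedure may differ.

```python
import numpy as np

# Hypothetical stand-in for the learned latent dynamics z_{k+1} = f(z_k);
# in the paper this role would be played by the trained RNN acting on the
# encoder's low-dimensional state.
def f(z, a=1.2):
    return np.tanh(a * np.roll(z, 1) + 0.5 * z)

def largest_le(f, z0, n_steps=5000, eps=1e-8, seed=0):
    """Benettin method: evolve a reference and a perturbed trajectory,
    renormalize their separation back to eps every step, and average
    the logarithmic growth rates."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(z0.shape)
    z = z0.copy()
    zp = z0 + eps * v / np.linalg.norm(v)  # perturbation of norm eps
    log_growth = 0.0
    for _ in range(n_steps):
        z, zp = f(z), f(zp)
        d = np.linalg.norm(zp - z)
        log_growth += np.log(d / eps)
        zp = z + (eps / d) * (zp - z)  # renormalize the perturbation
    return log_growth / n_steps

z0 = np.random.default_rng(1).standard_normal(8)  # 8-dim latent state
print(f"largest LE estimate: {largest_le(f, z0):.4f}")
```

A negative estimate indicates contracting (stable) learned dynamics, in line with the abstract's use of LEs as a readout of training stability; a positive one flags sensitive dependence on initial conditions.

Similarly, the derivative‐free MPC implementation can be illustrated with a single path‐integral (MPPI‐style) control update. The rollout model `f`, cost function `cost`, and all parameters below are illustrative placeholders, not the paper's exact formulation.

```python
import numpy as np

def mppi_step(f, cost, z0, U, n_samples=256, sigma=0.3, lam=1.0, seed=0):
    """One path-integral update of the control sequence U (H x m):
    sample control perturbations, roll out the learned model under each
    perturbed sequence, and reweight U by the exponentiated trajectory
    costs. No analytic derivatives of f are required."""
    rng = np.random.default_rng(seed)
    H, m = U.shape
    noise = sigma * rng.standard_normal((n_samples, H, m))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        z = z0.copy()
        for t in range(H):
            u = U[t] + noise[k, t]
            z = f(z, u)
            costs[k] += cost(z, u)
    w = np.exp(-(costs - costs.min()) / lam)  # path-integral weights
    w /= w.sum()
    return U + np.einsum("k,khm->hm", w, noise)  # importance-weighted update
```

In a receding‐horizon loop, the first control of the updated sequence is applied to the plant, the horizon is shifted, and the update is repeated at the next sampling instant.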
