In almost all industrial processes, developing an accurate mathematical model that comprehensively characterizes the regulated system poses a substantial obstacle. Diverse factors, such as unidentified system components and time-varying external disturbances, contribute to this difficulty. The result can be a disparity between the model and the plant, thereby compromising the performance of the designed controller. In this study, a novel data-driven ensemble optimal compensation control framework is proposed for partially known nonlinear systems with state delays and structural disturbances. First, the estimated system model is leveraged to construct a robust fuzzy model predictive controller (RFMPC) based on a Lyapunov–Krasovskii functional (LKF), ensuring input-to-state stability (ISS) of the plant. Then, a reinforcement learning (RL)-based compensator is designed to gradually minimize the model–plant mismatch across the entire state space. To streamline the training of the compensator and promote its stability, a dynamic network update scheme is devised, incorporating a moving-average gradient calculation method and a learning-rate tuning strategy. Furthermore, comparisons with several prevalent methods are discussed. With the established control framework, the performance of the RFMPC can be safely enhanced, and convergence to a small neighborhood of the equilibrium is ensured despite time-varying perturbation dynamics. By integrating this approach with deep learning (DL) methods, the efficacy of numerous industrial processes could be significantly improved. A numerical simulation and a discretized continuous stirred-tank reactor (CSTR) example validate the effectiveness and benefits of the proposed control strategy.
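As a rough illustration of the dynamic network update described above, a minimal sketch of a moving-average gradient step with learning-rate tuning is given below. The class, parameter names, smoothing factor, and decay rule are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

# Minimal sketch: exponential moving-average gradient update with a simple
# learning-rate tuning rule. All names and constants are assumed for illustration.
class CompensatorUpdater:
    def __init__(self, n_params, beta=0.9, lr0=1e-3, decay=0.999):
        self.avg_grad = np.zeros(n_params)  # moving average of the gradient
        self.beta = beta                    # smoothing factor (assumed value)
        self.lr = lr0                       # initial learning rate (assumed value)
        self.decay = decay                  # multiplicative learning-rate decay (assumed)

    def step(self, params, grad):
        # Smooth the raw gradient to damp noise from individual training samples.
        self.avg_grad = self.beta * self.avg_grad + (1.0 - self.beta) * grad
        # Simple learning-rate tuning: geometric decay at each update.
        self.lr *= self.decay
        # Gradient step on the compensator network parameters.
        return params - self.lr * self.avg_grad
```

In such a scheme, smoothing the gradient and shrinking the step size over time would be one way to stabilize the compensator's training, in the spirit of the update strategy summarized in the abstract.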