Abstract

Recurrent neural networks (RNNs) have been widely used to model nonlinear dynamic systems from time-series data. While the training error of neural networks can often be made sufficiently small, a general framework is lacking to guide model construction and to determine the generalization accuracy of RNN models intended for model predictive control (MPC) systems. In this work, we employ statistical machine learning theory to develop a methodological framework of generalization error bounds for RNNs. The RNN models are then used to predict state evolution in model predictive controllers, for which closed-loop stability is established in a probabilistic manner. A nonlinear chemical process example is used to investigate the impact of training sample size, RNN depth, width, and input time length on the generalization error, and closed-loop simulations under a Lyapunov-based MPC are carried out to analyze probabilistic closed-loop stability.

Highlights

  • Modeling large-scale, complex nonlinear processes has been a long-standing research problem in process systems engineering

  • In this work, we develop a methodological framework of generalization error bounds from machine learning theory for the development and verification of recurrent neural network (RNN) models with theoretical accuracy guarantees, and we integrate these models into model predictive control system design for nonlinear chemical processes

  • We develop a probabilistic generalization error bound for RNN models by taking advantage of the Rademacher complexity method for vector-valued functions (the standard definition is recalled below)
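
For reference, and not specific to this paper's vector-valued extension, the standard empirical Rademacher complexity of a hypothesis class $\mathcal{H}$ over a sample $S = \{x_1, \ldots, x_n\}$ is

\hat{\mathfrak{R}}_S(\mathcal{H}) = \mathbb{E}_{\sigma}\left[\sup_{h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i \, h(x_i)\right],

where $\sigma_1, \ldots, \sigma_n$ are i.i.d. random signs taking the values $\pm 1$ with equal probability. For vector-valued hypotheses such as RNN state predictors, the product $\sigma_i \, h(x_i)$ is typically replaced by an inner product with a vector of independent random signs.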

Summary

Introduction

Modeling large-scale, complex nonlinear processes has been a long-standing research problem in process systems engineering. The generalization error bound is a common methodology in statistical machine learning for evaluating the predictive performance of learning algorithms [33]. This bound depends on a number of factors, such as the number of data samples, the number of layers and neurons, the bounds on the weight matrices, and the initialization method. In this work, we develop a methodological framework of generalization error bounds from machine learning theory for the development and verification of RNN models with theoretical accuracy guarantees and integrate these models into model predictive control system design for nonlinear chemical processes. Closed-loop simulations are carried out to analyze the probabilistic closed-loop stability and performance.
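
To make the quantity being bounded concrete, the following is a minimal sketch (assuming PyTorch and a toy one-dimensional nonlinear system, both illustrative and not taken from the paper) of estimating the empirical generalization gap of an RNN one-step-ahead predictor by comparing training and held-out mean-squared error, i.e., the gap that a Rademacher-complexity-based bound upper-bounds in theory.

# Illustrative sketch only (not the paper's code or data): estimate the empirical
# generalization gap of an RNN one-step-ahead predictor on a toy nonlinear system.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n_traj, horizon=20):
    """Generate (input sequence, target sequence) pairs from a toy system
    x_{k+1} = 0.8*tanh(x_k) + 0.3*u_k + 0.1*x_k^2 driven by random inputs u_k."""
    x = 0.5 * torch.randn(n_traj, 1)            # random initial states
    u = 0.5 * torch.randn(n_traj, horizon, 1)   # random manipulated inputs
    inp, tgt = [], []
    for k in range(horizon):
        inp.append(torch.cat([x, u[:, k]], dim=1))            # RNN input: (x_k, u_k)
        x = 0.8 * torch.tanh(x) + 0.3 * u[:, k] + 0.1 * x ** 2
        tgt.append(x)                                         # RNN target: x_{k+1}
    return torch.stack(inp, dim=1), torch.stack(tgt, dim=1)   # (n_traj, horizon, 2) / (n_traj, horizon, 1)

class RNNPredictor(nn.Module):
    def __init__(self, hidden_size=32, num_layers=1):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, seq):
        out, _ = self.rnn(seq)
        return self.head(out)

def mse(model, data):
    seq, tgt = data
    with torch.no_grad():
        return nn.functional.mse_loss(model(seq), tgt).item()

train, test = simulate(n_traj=200), simulate(n_traj=2000)
model = RNNPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(train[0]), train[1])
    loss.backward()
    opt.step()

gap = mse(model, test) - mse(model, train)   # empirical generalization gap
print(f"train MSE={mse(model, train):.4f}  test MSE={mse(model, test):.4f}  gap={gap:.4f}")

Repeating this run for different training-set sizes, hidden widths, or numbers of recurrent layers gives an empirical counterpart to the factors on which the theoretical bound depends.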

Notation
Class of Systems
Recurrent Neural Network Model
RNN Generalization Error
Preliminaries
Rademacher Complexity Bound
RNN-Based MPC with Probabilistic Stability Analysis
Lyapunov-Based Control Using RNN Models
Stabilization of Nonlinear System under Lyapunov-Based Controller
Lyapunov-Based MPC Using RNN Models for Nonlinear Systems
Application to a Chemical Process Example
RNN Generalization Performance
Case Study 1
Case Study 2: RNN Depth and Width
Case Study 3
Case Study 4
Case Study 5
Closed-Loop Performance Analysis
Conclusions
