Abstract

In chemical processes, safety constraints must be satisfied despite uncertainties. Reinforcement learning (RL) is a framework for learning optimal control policies through interaction with the system. Recent studies have shown that well-trained RL controllers can improve the performance of chemical processes, but practical application requires additional schemes to guarantee constraint satisfaction. In our previous work, we proposed a model-based safe RL method in which both state and input constraints can be handled by introducing barrier functions into the objective function. This study extends that method to satisfy the constraints under model-plant mismatch and stochastic disturbances. Gaussian processes are employed to predict the expectation and variance of the constraint errors caused by these uncertainties, and these predictions are then used to tighten the constraints via backoffs. With these adaptive backoffs, the safe RL method can satisfy chance constraints and learn the optimal control policy of the uncertain nonlinear system.
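
To make the backoff idea concrete, below is a minimal sketch (not the authors' implementation) of GP-based adaptive constraint tightening: a Gaussian process is fit to observed constraint errors (plant value minus model prediction), and its predictive mean and standard deviation shift the nominal constraint so that a chance constraint P[g(x) <= 0] >= 1 - delta holds approximately. The constraint g_nominal, the synthetic error data, and all names below are illustrative assumptions.

```python
# Minimal sketch of GP-based adaptive backoffs for chance constraints.
# Assumptions: g_nominal, the error data, and delta are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def g_nominal(x):
    # Hypothetical nominal state constraint: g(x) <= 0 means "safe".
    return x - 1.0

# Training data: states and the constraint errors observed on the plant
# (plant constraint value minus model prediction); synthetic here.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(30, 1))
err = 0.1 * np.sin(3.0 * X[:, 0]) + 0.02 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, err)

delta = 0.05                   # allowed constraint-violation probability
kappa = norm.ppf(1.0 - delta)  # one-sided Gaussian quantile

def g_tightened(x):
    """Nominal constraint plus the GP's predicted error mean and a
    variance-based backoff; enforcing g_tightened(x) <= 0 targets the
    chance constraint under the Gaussian error model."""
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    return g_nominal(x) + mu[0] + kappa * sigma[0]

x_test = 0.8
print(f"nominal g:   {g_nominal(x_test):+.3f}")
print(f"tightened g: {g_tightened(x_test):+.3f}")
```

Because the backoff kappa * sigma(x) grows where the GP is uncertain, the tightening adapts to the data rather than using a fixed worst-case margin, which is what allows the policy to remain less conservative in well-explored regions of the state space.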
