Abstract

Reinforcement learning (RL) algorithms are a set of goal-oriented machine learning algorithms that can perform control and optimization in a system. Most RL algorithms require no information about the underlying dynamics of the system; they need only input and output information. RL algorithms can therefore be applied to a wide range of systems. This paper explores the use of a custom environment to optimize a problem pertinent to process engineers. In this study the custom environment is a continuously stirred tank reactor (CSTR). The purpose of using a custom environment is to illustrate that any number of systems can readily become RL environments. Three RL algorithms are investigated: deep deterministic policy gradient (DDPG), twin-delayed DDPG (TD3), and proximal policy optimization (PPO). They are evaluated on how they converge to a stable solution and how well they dynamically optimize the economics of the CSTR. All three algorithms perform 98% as well as a first-principles model coupled with a non-linear solver, but only TD3 converges to a stable solution. While limited in scope, this paper seeks to further open the door to coupling powerful RL algorithms with process systems engineering.
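The abstract does not reproduce the environment code; as a loose illustration of how a process system such as a CSTR can be wrapped as a custom RL environment, a minimal sketch using the Gymnasium API is shown below. The state variables, action, dynamics, and reward here are placeholders, not the CSTR model or economic objective from the study.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CSTREnv(gym.Env):
    """Illustrative CSTR environment skeleton.

    Variable names, dynamics, and reward are placeholders, not the
    first-principles model or economics used in the paper.
    """

    def __init__(self):
        super().__init__()
        # Observation: e.g. normalized reactor concentration and temperature.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        # Action: e.g. a manipulated variable such as coolant flow, scaled to [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.state = None

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Start from a random operating point inside the normalized range.
        self.state = self.np_random.uniform(0.2, 0.8, size=2).astype(np.float32)
        return self.state, {}

    def step(self, action):
        # Placeholder dynamics: a real implementation would integrate the CSTR
        # mass and energy balances over one control interval.
        self.state = np.clip(self.state + 0.05 * float(action[0]), 0.0, 1.0).astype(np.float32)
        # Placeholder reward: the paper instead maximizes an economic objective.
        reward = float(-np.abs(self.state - 0.5).sum())
        terminated = False
        truncated = False
        return self.state, reward, terminated, truncated, {}
```

With an interface like this, off-the-shelf implementations of DDPG, TD3, and PPO can be trained against the environment without any knowledge of the underlying process model, which is the point the abstract makes about custom environments.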
