Abstract

Reinforcement learning (RL) algorithms are a set of goal-oriented machine learning algorithms that can perform control and optimization in a system. Most RL algorithms do not require any information about the underlying dynamics of the system; they only require input and output information. RL algorithms can therefore be applied to a wide range of systems. This paper explores the use of a custom environment to optimize a problem pertinent to process engineers. In this study the custom environment is a continuously stirred tank reactor (CSTR). The purpose of using a custom environment is to illustrate that any number of systems can readily become RL environments. Three RL algorithms are investigated: deep deterministic policy gradient (DDPG), twin-delayed DDPG (TD3), and proximal policy optimization (PPO). They are evaluated on whether they converge to a stable solution and on how well they dynamically optimize the economics of the CSTR. All three algorithms perform 98% as well as a first-principles model coupled with a non-linear solver, but only TD3 demonstrates convergence to a stable solution. While limited in scope, this paper seeks to further open the door to a coupling between powerful RL algorithms and process systems engineering.
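To make the claim that "any number of systems can readily become RL environments" concrete, the sketch below shows how a simplified CSTR might be wrapped in the standard Gymnasium environment interface. The reactor dynamics, parameter values, and economic reward used here are illustrative assumptions, not the model from the study; any Gymnasium-compatible implementation of DDPG, TD3, or PPO (for example, from Stable-Baselines3) could then be trained against such an environment without access to its internal equations.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CSTREnv(gym.Env):
    """Toy continuously stirred tank reactor environment (illustrative only).

    State: reactant concentration C_A and reactor temperature T.
    Action: coolant temperature, the manipulated variable.
    Reward: a simple economic proxy (reaction rate minus a cooling cost).
    The dynamics are a generic first-order exothermic reaction, not the
    first-principles model used in the paper.
    """

    def __init__(self, dt: float = 0.1):
        super().__init__()
        self.dt = dt
        # Action: coolant temperature, normalised to [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: [C_A (mol/L), T (K)]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 250.0], dtype=np.float32),
            high=np.array([2.0, 450.0], dtype=np.float32),
            dtype=np.float32,
        )
        self.state = None

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([1.0, 350.0], dtype=np.float32)  # initial C_A, T
        return self.state.copy(), {}

    def step(self, action):
        c_a, temp = float(self.state[0]), float(self.state[1])
        t_c = 300.0 + 50.0 * float(action[0])        # map action to coolant temp (K)
        k = 7.2e10 * np.exp(-8750.0 / temp)          # Arrhenius rate constant (assumed values)
        rate = k * c_a                                # first-order reaction rate

        # Explicit Euler step of simplified mass and energy balances
        c_a += self.dt * (1.0 - c_a - rate)
        temp += self.dt * (350.0 - temp + 50.0 * rate + 5.0 * (t_c - temp))

        self.state = np.array(
            [np.clip(c_a, 0.0, 2.0), np.clip(temp, 250.0, 450.0)], dtype=np.float32
        )
        reward = float(rate - 0.01 * abs(t_c - 300.0))  # product value minus cooling cost
        terminated = bool(self.state[1] >= 450.0)       # unsafe temperature ends the episode
        return self.state.copy(), reward, terminated, False, {}
```

With this interface in place, training an agent reduces to a few lines, e.g. `TD3("MlpPolicy", CSTREnv()).learn(100_000)` with Stable-Baselines3; the agent sees only observations, actions, and rewards, mirroring the model-free setting described above.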
