Abstract

This research focuses on applying reinforcement learning to chemical plant control in order to optimize production while maintaining plant stability, without requiring knowledge of the plant model. Since a typical chemical plant has a large number of sensors and actuators, its control problem can be formulated as a Markov decision process with a high-dimensional state space and a huge number of actions, which is difficult for previous methods to solve due to computational complexity and insufficient samples. To overcome these issues, we propose a new reinforcement learning method, Factorial Kernel Dynamic Policy Programming (FKDPP), which employs 1) a factorial policy model and 2) a factor-wise kernel-based smooth policy update obtained by regularization with the Kullback-Leibler divergence between the current and updated policies. To validate its effectiveness, FKDPP is evaluated on the Vinyl Acetate Monomer (VAM) plant model, a popular benchmark for chemical plant control. Compared with previous methods, which cannot directly handle a huge number of actions, the proposed method uses the same number of training samples and achieves a better control strategy with respect to VAM yield, product quality, and plant stability.
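As a rough sketch of the two ingredients named above (a standard DPP-style formulation assumed here for illustration, not the exact equations of the paper), the factorial policy decomposes a joint action a = (a_1, ..., a_K) over K actuators as

    \pi(a \mid s) = \prod_{k=1}^{K} \pi_k(a_k \mid s),

and a factor-wise, KL-regularized update, with inverse temperature \eta and a kernel-based action-value estimate \hat{Q}_k (both illustrative symbols), takes the multiplicative form

    \pi_k^{(t+1)}(a_k \mid s) \propto \pi_k^{(t)}(a_k \mid s)\, \exp\!\big(\eta\, \hat{Q}_k^{(t)}(s, a_k)\big),

which maximizes \mathbb{E}_{\pi_k}\big[\hat{Q}_k^{(t)}(s, a_k)\big] - \tfrac{1}{\eta}\,\mathrm{KL}\big(\pi_k \,\|\, \pi_k^{(t)}\big). The factorization is what keeps the action space tractable: K actuators with M discrete settings each require only K \cdot M policy values per state, rather than the M^K joint actions a monolithic policy would have to enumerate.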
