Abstract

The goal of process control is to maintain a process at the desired operating conditions. Disturbances, measurement uncertainties, and high-order dynamics in complex and highly integrated chemical processes pose a challenging control problem. Although advanced process controllers, such as Model Predictive Control (MPC), have been successfully implemented to solve hard control problems, they are difficult to develop, rely on a process model, and require high-performance computers and continuous maintenance. Reinforcement learning presents an appealing option for such complex systems, but little work has been done to apply reinforcement learning to chemical reactions of practical significance, to discuss the structure of the RL agent, or to evaluate its performance against benchmark measures. This work (1) applies a state-of-the-art reinforcement learning algorithm (DDPG) to a network of reactions with challenging dynamics and practical significance; (2) simulates disturbances and measurement uncertainties; and (3) defines an observation space based on the working concept of a PID controller, optimizes the reward function to achieve the desired controller performance, and evaluates the performance of the RL controller in terms of setpoint tracking, disturbance rejection, and robustness to parameter uncertainties.
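To make the PID-inspired observation space concrete, the sketch below shows one plausible way to build such an observation vector from the setpoint error, its running integral, and its derivative, mirroring the P, I, and D terms of a classical controller. This is a minimal illustration under assumed conventions; the class name, interface, and sampling time are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

class PIDObservation:
    """Builds an RL observation from the setpoint error, its running
    integral, and its derivative -- the P, I, and D terms of a
    classical PID controller."""

    def __init__(self, dt: float):
        self.dt = dt              # sampling time (illustrative value)
        self.integral = 0.0
        self.prev_error = 0.0

    def reset(self) -> None:
        """Clear accumulated state at the start of each episode."""
        self.integral = 0.0
        self.prev_error = 0.0

    def __call__(self, measurement: float, setpoint: float) -> np.ndarray:
        error = setpoint - measurement                     # P term
        self.integral += error * self.dt                   # I term
        derivative = (error - self.prev_error) / self.dt   # D term
        self.prev_error = error
        return np.array([error, self.integral, derivative],
                        dtype=np.float32)

if __name__ == "__main__":
    builder = PIDObservation(dt=0.1)
    # e.g. a measured reactor temperature of 348 K with a 350 K setpoint
    obs = builder(measurement=348.0, setpoint=350.0)
    print(obs)  # [error, integral, derivative] -> input to the DDPG actor
```

In this formulation the agent sees the same signals a PID controller acts on, which gives the policy direct access to both steady-state offset (via the integral) and trend information (via the derivative) rather than raw measurements alone.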
