Abstract

The repetitive (periodic, batch) process is widely used in modern industrial production. However, for complex batch processes with nonlinear and non-repetitive dynamics, designing an effective control scheme remains a critical problem in both theoretical research and practical application. Motivated by the excellent performance of deep reinforcement learning (DRL) in decision-making for complex dynamical systems through interaction, without requiring any prior knowledge of the process, in this paper we propose a model-free controller design scheme based on soft actor-critic (SAC), an advanced off-policy DRL algorithm. By properly designing the state information, the neural network structure of the policy, and the reward function, the SAC agent is trained as a nonlinear two-dimensional (2D) state feedback controller that achieves high tracking performance and strong robustness for nonlinear, non-repetitive batch processes. Simulation results demonstrate the effectiveness and applicability of the proposed control method and show that its performance is significantly superior to that of conventional iterative learning control (ILC) schemes.
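To make the 2D state-feedback idea concrete, the sketch below shows one plausible way to assemble an agent state that spans both directions of a batch process (the time direction within a batch and the batch direction across runs) together with a quadratic tracking reward. All names, signal choices, and weights here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def build_2d_state(y_t, y_ref_t, e_prev_batch_t, u_prev):
    """Hypothetical 2D state for the SAC agent.

    Combines time-direction information (current output y_t, current
    tracking error) with batch-direction information (the tracking error
    at the same time instant from the previous batch) and the last
    control input. The exact composition is an assumption.
    """
    e_t = y_ref_t - y_t  # within-batch tracking error
    return np.array([y_t, e_t, e_prev_batch_t, u_prev], dtype=np.float64)

def tracking_reward(y_t, y_ref_t, u_t, w_e=1.0, w_u=0.01):
    """Illustrative reward: negative quadratic cost on tracking error
    and control effort, so perfect tracking with zero input yields 0."""
    e_t = y_ref_t - y_t
    return -(w_e * e_t ** 2 + w_u * u_t ** 2)

# Example: one step where the output undershoots the setpoint.
s = build_2d_state(y_t=0.8, y_ref_t=1.0, e_prev_batch_t=0.3, u_prev=0.5)
r = tracking_reward(y_t=0.8, y_ref_t=1.0, u_t=0.5)
```

Feeding a state like `s` to the SAC policy network and maximizing the discounted sum of rewards like `r` over both time steps and batches is one way such an agent could learn a nonlinear 2D feedback law; the paper's specific state, network, and reward designs may differ.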
