Abstract

Process control is widely studied in manufacturing, especially in semiconductor manufacturing. Because disturbances in manufacturing are unavoidable, various process controllers have been proposed to reduce variation. Since Reinforcement Learning (RL) has shown great advantages in learning actions from interactions with a dynamic system, we introduce RL methods for process control and propose a new class of controllers, called RL-based controllers. Because most existing run-to-run (R2R) controllers rely on a linear model assumption for the process input–output relationship, we first discuss theoretical properties of RL-based controllers under this linear model assumption. We then compare the performance of RL-based controllers with that of traditional R2R controllers (e.g., Exponentially Weighted Moving Average (EWMA), double EWMA, adaptive EWMA, and general harmonic rule controllers) on linear processes. Furthermore, we find that RL-based controllers have potential advantages in handling more complicated nonlinear processes. Extensive numerical studies validate the advantages of the proposed RL-based controllers.
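
To make the linear-process baseline concrete, the sketch below implements a standard single-EWMA run-to-run controller on a linear process y_t = alpha + beta * x_t + eps_t, one of the traditional R2R controllers the abstract compares against. All numerical values (process parameters, noise level, discount factor, target) are hypothetical illustrations, not settings from the paper.

```python
import numpy as np

# Minimal sketch of a single-EWMA run-to-run controller (assumed linear
# process; all constants below are hypothetical, not from the paper).
rng = np.random.default_rng(0)

alpha, beta = 2.0, 1.5   # true (unknown) process intercept and gain
b = 1.2                  # controller's estimate of the gain beta
target = 10.0            # desired process output
lam = 0.3                # EWMA discount factor, 0 < lam <= 1

a = 0.0                  # EWMA estimate of the intercept alpha
x = (target - a) / b     # initial recipe (control input)

for t in range(50):
    # Run the process and observe a noisy output.
    y = alpha + beta * x + rng.normal(scale=0.5)
    # EWMA update of the intercept estimate from the observed offset.
    a = lam * (y - b * x) + (1 - lam) * a
    # Set the next run's recipe so the predicted output hits the target.
    x = (target - a) / b
    print(f"run {t:2d}: x = {x:6.3f}, y = {y:6.3f}")
```

Under the linear model assumption, this update drives the output toward the target even when the gain estimate b is biased; the RL-based controllers proposed in the paper are compared against controllers of exactly this form.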
