Abstract
Process control is widely studied in manufacturing, especially in semiconductor manufacturing. Because disturbances in manufacturing are unavoidable, various process controllers have been proposed to reduce variation. Since Reinforcement Learning (RL) has shown great advantages in learning actions from interactions with a dynamic system, we introduce RL methods for process control and propose a new controller, the RL-based controller. Because most existing run-to-run (R2R) controllers rely on a linear model assumption for the process input–output relationship, we first discuss theoretical properties of RL-based controllers under that linear model assumption. We then compare the performance of RL-based controllers with traditional R2R controllers (e.g., Exponentially Weighted Moving Average (EWMA), double EWMA, adaptive EWMA, and general harmonic rule controllers) on linear processes. Furthermore, we find that RL-based controllers have potential advantages in handling more complicated nonlinear processes. Intensive numerical studies validate the advantages of the proposed RL-based controllers.
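To make the baseline concrete, the following is a minimal sketch of the single-input single-output EWMA run-to-run controller mentioned above, assuming a linear process y_t = alpha + beta * u_t + noise; the parameter names (`alpha`, `beta`, the controller's gain estimate `b`, and the smoothing weight `lam`) are illustrative choices, not values from the paper.

```python
import random

def simulate_ewma_r2r(target=0.0, alpha=2.0, beta=1.5, b=1.2,
                      lam=0.3, n_runs=50, noise_sd=0.1, seed=0):
    """Simulate an EWMA run-to-run controller on a linear process.

    Assumed process model: y_t = alpha + beta * u_t + eps_t.
    The controller only knows an estimated gain b; it tracks the
    process intercept with an EWMA filter and sets each recipe so
    the predicted output hits the target.
    """
    rng = random.Random(seed)
    a = 0.0                       # EWMA estimate of the process intercept
    u = (target - a) / b          # initial recipe
    outputs = []
    for _ in range(n_runs):
        y = alpha + beta * u + rng.gauss(0.0, noise_sd)  # observed output
        outputs.append(y)
        a = lam * (y - b * u) + (1 - lam) * a            # EWMA intercept update
        u = (target - a) / b                             # next run's recipe
    return outputs

# After the transient, outputs settle near the target despite the
# gain mismatch (beta != b), provided |1 - lam * beta / b| < 1.
outputs = simulate_ewma_r2r()
```

This illustrates why the linear model assumption matters for EWMA-type controllers: stability hinges on the ratio of the true gain `beta` to the estimated gain `b`, a quantity that is well defined only for linear processes.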