Abstract

In semiconductor manufacturing, extended state observer (ESO)-based run-to-run (RtR) control is a promising solution. Although an ESO-RtR control strategy can effectively compensate for the lumped disturbance, it requires appropriate observer gains. In this article, a deep reinforcement learning (DRL) technique is integrated into ESO-RtR, yielding a composite DRL-ESO-RtR control framework. Specifically, the trained DRL agent serves as an auxiliary controller that supplies appropriate ESO gains; the tuned ESO in turn delivers an improved control recipe for the manufacturing process. Under the RtR framework, the ESO gain-tuning problem is formulated as a Markov decision process, with the state space and reward function carefully designed from the system's observable information. The ESO gains are thereby adjusted adaptively to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient (TD3) algorithm is employed to implement the proposed scheme. The feasibility and superiority of the method are validated on a deep reactive ion etching process, where comparative results show that it outperforms an ordinary ESO-RtR controller in terms of disturbance rejection.
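The abstract gives no implementation details, but the control structure it describes can be illustrated with a rough sketch. The code below implements a first-order run-to-run ESO whose gains are supplied externally each run, as a trained TD3 agent would. The process model y_k = b*u_k + d_k, the gain names l1/l2, and the helper agent_gains are assumptions made for illustration only, not the authors' implementation.

```python
import numpy as np

def eso_rtr_step(y_meas, z1, z2, l1, l2, b, target):
    """One run-to-run iteration for the assumed model y_k = b*u_k + d_k.

    z1 predicts the output, z2 estimates the lumped disturbance d_k;
    l1, l2 are the observer gains a trained DRL agent would supply.
    """
    e = y_meas - z1                  # innovation: measured vs. predicted output
    z2 = z2 + l2 * e                 # refresh lumped-disturbance estimate
    u_next = (target - z2) / b       # recipe that cancels the estimate
    z1 = b * u_next + z2 + l1 * e    # predict the next run's output
    return u_next, z1, z2

def agent_gains(state):
    """Stand-in for the trained TD3 policy (hypothetical). A real agent
    would map observable information such as recent tracking errors to
    observer gains; constants are returned here for illustration."""
    return 0.6, 0.3

target, b = 1.0, 2.0                 # desired output and assumed process gain
u, z1, z2 = target / b, target, 0.0
for k in range(50):
    d = 0.5 * np.sin(0.2 * k) + 0.01 * k      # drifting lumped disturbance
    y = b * u + d                              # outcome of the k-th run
    l1, l2 = agent_gains(state=(y - target,))  # agent picks gains per run
    u, z1, z2 = eso_rtr_step(y, z1, z2, l1, l2, b, target)
print(f"final tracking error: {y - target:+.4f}")
```

With the constant gains above the observer is stable for this toy model and the recipe tracks the drifting disturbance; the point of the paper's scheme is that the agent varies l1 and l2 run by run instead of fixing them.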
