Abstract

A fault detection and classification (FDC) model uses equipment sensor data to predict whether each wafer will be faulty, which is important for achieving high yield and reducing cost. Constructing a high-performance FDC model with deep learning requires a large amount of labeled training data. However, in real-world semiconductor manufacturing processes, a recipe transition changes the distribution of the input sensor data, degrading the performance of the existing FDC model. Retraining the model for the new recipe is then necessary, but collecting a large amount of labeled data for the new recipe takes a long time. In this study, an adaptive fault detection framework is proposed to minimize the performance degradation caused by recipe transitions. In this framework, immediately after a recipe transition occurs, unsupervised adaptation is employed to reduce the performance degradation. Once inspection results for some new-recipe wafers become available, semi-supervised adaptation is employed to quickly recover performance with a small amount of labeled data. Through experiments on real-world data, we demonstrate that the proposed framework adapts to the new recipe with reduced performance degradation.
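The abstract does not specify which adaptation methods are used; the following is only a minimal sketch of the two-stage workflow it describes, assuming CORAL-style feature alignment as a stand-in for the unsupervised step and weighted retraining on a few labeled new-recipe wafers as a stand-in for the semi-supervised step. All function and variable names here are illustrative, not the authors'.

```python
# Illustrative sketch only; the alignment and fine-tuning choices are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def coral_align(X_new, X_old):
    """Whiten new-recipe features and re-color them with old-recipe statistics
    so the existing FDC model sees a familiar input distribution."""
    def cov(X):
        return np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    def sqrt_inv(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T
    def sqrt_m(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** 0.5) @ V.T
    centered = X_new - X_new.mean(axis=0)
    return centered @ sqrt_inv(cov(X_new)) @ sqrt_m(cov(X_old)) + X_old.mean(axis=0)

rng = np.random.default_rng(0)

# Phase 0: FDC model trained on labeled data from the old recipe.
X_old, y_old = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)
model = LogisticRegression(max_iter=1000).fit(X_old, y_old)

# Phase 1 (unsupervised): right after the recipe transition only unlabeled
# new-recipe wafers exist, so align their features before prediction.
X_new_unlabeled = rng.normal(loc=0.5, size=(200, 8))
preds = model.predict(coral_align(X_new_unlabeled, X_old))

# Phase 2 (semi-supervised): once a few inspection results arrive, retrain on
# the old data plus the small labeled new-recipe set, upweighting the latter.
X_new_labeled, y_new = rng.normal(loc=0.5, size=(20, 8)), rng.integers(0, 2, 20)
X_ft = np.vstack([X_old, coral_align(X_new_labeled, X_old)])
y_ft = np.concatenate([y_old, y_new])
w_ft = np.concatenate([np.ones(len(y_old)), 10.0 * np.ones(len(y_new))])
model = LogisticRegression(max_iter=1000).fit(X_ft, y_ft, sample_weight=w_ft)
```

The key design point the abstract implies is the hand-off between the two phases: the unsupervised step keeps the existing model usable immediately after the transition, and the semi-supervised step recovers performance as soon as a small labeled set is available, rather than waiting to collect a full retraining dataset.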
