Abstract

Accurate detection and classification of wafer defects is an important component of semiconductor manufacturing: it yields interpretable information for identifying the possible root causes of defects and for taking action on quality management and yield improvement. The traditional approach to wafer defect classification, performed manually by experienced engineers with computer-aided tools, is time-consuming and prone to low accuracy. Automated wafer defect detection using deep learning has therefore attracted considerable attention as a way to improve the detection process. However, most of this work has focused on defect classification and has ignored defect localization, which is equally important for determining how specific process steps lead to defects at particular locations. To address this, we evaluate the state-of-the-art You Only Look Once (YOLO) architecture for accurately locating and classifying wafer map defects. Experimental results on 19,200 wafer maps show that YOLOv3 and YOLOv4, two variants of the YOLO architecture, achieve >94% classification accuracy in real time. For comparison, two other architectures, ResNet50 and DenseNet121, are also evaluated for wafer defect classification; they achieve accuracies of 89% and 92%, respectively, but offer no localization ability. We find that object detection methods are highly effective for locating and classifying defects on semiconductor wafers.
