Abstract

In the semiconductor industry, automated visual inspection aims to improve the detection and recognition of manufacturing defects by leveraging the power of artificial intelligence and computer vision systems, enabling manufacturers to profit from increased yield and reduced manufacturing costs. Previous domain-specific contributions often utilized classical computer vision approaches, whereas more novel systems deploy deep-learning-based ones. However, a persistent problem in the domain stems from the recognition of very small defect patterns, which are often only a few µm (and thus only a few pixels) in size, within vast amounts of high-resolution imagery. While these defect patterns occur on the significantly larger wafer surface, classical machine learning and deep learning solutions struggle with the complexity of this challenge. This contribution introduces a novel hybrid multistage system of stacked deep neural networks (SH-DNN) which allows the localization of the finest pixel-sized structures via a classical computer vision pipeline, while the classification process is realized by deep neural networks. The proposed system shifts the focus from the finest structural level of detail to more task-relevant areas of interest. As the created test environment shows, our SH-DNN-based multistage system surpasses current approaches of learning-based automated visual inspection. The system reaches a performance (F1-score) of up to 99.5%, corresponding to a relative improvement of the system's fault detection capabilities by 8.6-fold. Moreover, by specifically selecting models for the given manufacturing chain, runtime constraints are satisfied while improving the detection capabilities of currently deployed approaches.

Highlights

  • Automated visual fault inspection processes involve the development and integration of systems for capturing and monitoring manufacturing results

  • For our baseline approach of chip-based classification into flawless and faulty chips, we observe that the deployed models show F1-scores from 62.8% up to 95%, with the lower scores belonging to approaches such as linear discriminant analysis (LDA) and the highest scores to approaches such as the Extra Trees classifier, Random Forest, or multilayer perceptron

  • The designed and implemented automated visual fault inspection system combines the advantages of classical image processing approaches with deep learning based ones in the form of a hybrid multistage system of stacked deep neural networks (SH-DNN)


Summary

Introduction and motivation

Automated visual fault inspection processes involve the development and integration of systems for capturing and monitoring manufacturing results. We approach the problem of automated visual fault detection and recognition in the field of semiconductor manufacturing (Hooper et al. 2015; Rahim and Mian 2017). Deep neural networks are able to either process the given input at its native image resolution or utilize a downsampled version of the input imagery. Both options lead to problems in the considered application area. In an efficient process pipeline, the time given for one wafer is on the order of minutes, while several thousand single chip and street images need to be processed by the image processing system. This corresponds to several tens of milliseconds per image, e.g., 6000 images to be processed within 5 min, resulting in 50 ms per image. More complex as well as visual-attention-inspired, neuroscience-related models exist, inter alia, saliency models (Itti et al. 1998) or system-level attention models (Hamker 2005).
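The per-image budget above follows directly from the wafer-level constraint. A minimal sketch of this arithmetic (the numbers are those stated in the text; the helper name is our own, not from the paper):

```python
def per_image_budget_ms(images_per_wafer: int, wafer_time_min: float) -> float:
    """Return the per-image processing budget in milliseconds,
    given the number of images per wafer and the total wafer time."""
    total_ms = wafer_time_min * 60 * 1000  # minutes -> milliseconds
    return total_ms / images_per_wafer

# Example from the text: 6000 images within 5 min -> 50 ms per image.
budget = per_image_budget_ms(6000, 5)
print(budget)  # 50.0
```

Any inspection model deployed in such a pipeline must therefore complete inference (and any pre-/post-processing) within this budget.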

Related work
Findings
Conclusion and outlook
