Abstract

This paper addresses the recognition of defective epoxy drop images using deep neural networks for vision-based die attachment inspection in integrated circuit (IC) manufacturing. Two supervised and two unsupervised recognition models are considered: the supervised models are an autoencoder (AE) network together with a multi-layer perceptron (MLP) network and a VGG16 network, while the unsupervised models pair each of these two networks, the AE and VGG16, with k-means clustering. Since very few defective epoxy drop images are available on an actual IC production line, the emphasis in this paper is placed on the impact of data augmentation on the recognition outcome. Data augmentation is achieved by synthesizing defective epoxy drop images with our previously developed enhanced-loss-function CycleGAN generative network. The experimental results indicate that, with data augmentation, both the supervised and unsupervised VGG16 models achieve perfect or near-perfect accuracy in recognizing defective epoxy drop images on the dataset examined. More specifically, data augmentation improves the recognition accuracy of the supervised AE+MLP and VGG16 models by 47% and 1%, respectively, and of the unsupervised AE+k-means and VGG16+k-means models by 37% and 15%, respectively.
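The abstract gives no implementation details, but the unsupervised pipeline it describes, deep features followed by two-cluster k-means (normal vs. defective), can be sketched in plain NumPy. The `kmeans2` helper, the synthetic Gaussian feature vectors, and the farthest-point initialization below are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def kmeans2(features, iters=100):
    """Two-cluster k-means on feature vectors (e.g. AE or VGG16 embeddings).

    Farthest-point initialization: the second center is the vector farthest
    from the first. This is deterministic and behaves well when one cluster
    (the defective class) is small but well separated from the other.
    """
    centers = np.stack([
        features[0],
        features[np.argmax(np.linalg.norm(features - features[0], axis=1))],
    ])
    for _ in range(iters):
        # assign every feature vector to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its assigned vectors
        new_centers = np.stack([features[labels == j].mean(axis=0)
                                for j in range(2)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy stand-in for extracted image features: a large "normal" cluster and a
# small, well-separated "defective" cluster (counts and spread are made up).
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.3, size=(40, 8))
defective = rng.normal(3.0, 0.3, size=(10, 8))
X = np.vstack([normal, defective])
labels, _ = kmeans2(X)
```

In the paper's unsupervised models, `features` would come from the AE bottleneck or a VGG16 layer rather than synthetic Gaussians, and the smaller of the two resulting clusters would be flagged as defective.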
