Abstract

This article presents a deep learning scheme for automatic defect detection in material surfaces. The success of deep learning model training is generally determined by the number of representative training samples and the quality of the annotation. Annotating defects pixel by pixel in an image to train a semantic segmentation model is extremely tedious. In this study, we propose a two-stage deep learning scheme that tackles pixel-wise defect detection in textured surfaces without manual annotation. The first stage uses two cycle-consistent adversarial network (CycleGAN) models to automatically synthesize defect images and annotate their defect pixels. The synthesized defect images and their corresponding annotations from the CycleGAN models are then used as input-output pairs for training a U-Net semantic segmentation network. The proposed scheme requires only a few real defect samples for training and no manual annotation at all. It is practical and computationally efficient for implementation in manufacturing. Experimental results show that the proposed deep learning scheme can be applied to defect detection in a variety of textured and patterned surfaces, and achieves high detection accuracy.
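The two-stage data-generation idea described above can be sketched as follows. This is a minimal, hypothetical mock-up: simple NumPy stand-ins replace the two CycleGAN generators, and the function names (`synthesize_defect`, `annotate_defect`), the patch size, and the thresholding rule are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

H, W = 64, 64  # assumed patch size, for illustration only

def synthesize_defect(clean_img, rng):
    """Stand-in for CycleGAN #1: clean texture -> defective texture.
    A real model would learn this mapping; here we paint a flaw."""
    img = clean_img.copy()
    y, x = rng.integers(0, H - 8), rng.integers(0, W - 8)
    img[y:y + 8, x:x + 8] = 1.0  # fake 8x8 defect region
    return img

def annotate_defect(defect_img, clean_img):
    """Stand-in for CycleGAN #2: defective image -> pixel-wise mask.
    Here we simply threshold the difference from the clean texture."""
    return (np.abs(defect_img - clean_img) > 0.5).astype(np.float32)

rng = np.random.default_rng(0)
clean = (rng.random((H, W)) * 0.4).astype(np.float32)  # fake texture

# Stage 1 output: synthesized (image, mask) pairs.
# Stage 2 would train a U-Net on these pairs, with no manual labels.
pairs = []
for _ in range(4):
    x = synthesize_defect(clean, rng)
    y = annotate_defect(x, clean)
    pairs.append((x, y))
```

The point of the sketch is the data flow: once the two generators produce paired images and masks, the downstream segmentation network is trained exactly as if the pairs had been hand-annotated.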


