Abstract

Due to the limited efficiency and accuracy and the rising cost of manual defect detection in the textile industry, online visual inspection for fabric defects has emerged as an essential and promising research area. However, challenges such as the lack of defective samples and issues with industrial deployment still persist. This paper presents a novel defect detection technique based on deep learning, which comprises two main frameworks. First, we design an improved generative adversarial network with an encoder–decoder architecture to address the scarcity of the requisite defective samples. We use defect-free samples as input to the generator, ensuring that the generated defective samples preserve a similar fabric pattern, and we mitigate the vanishing gradient problem by using the Wasserstein distance as the loss function. Second, we enhance the Single Shot MultiBox Detector network by introducing Inception modules and feature fusion to detect defects across different scales, and we select the AdaBound optimizer to update the model parameters. We compare the proposed approach with other methods on self-generated fabric data sets that are partially produced by our generative adversarial network model. An online defect detection system is also built to capture fabric images and evaluate the approach in a production environment. Experiments demonstrate the superior performance of the proposed approach, which achieves 97.5% accuracy in real time, making it well suited for industrial application.
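As a rough illustration of the first framework, the sketch below pairs an encoder–decoder generator (defect-free images in, synthetic defective images out) with a Wasserstein critic trained via weight clipping. The layer sizes, channel counts, and clipping constant are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, assuming PyTorch; architecture details are placeholders.
import torch
import torch.nn as nn

class EncoderDecoderGenerator(nn.Module):
    """Maps a defect-free fabric image to a synthetic defective image
    that keeps the same background texture (encoder-decoder design)."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Critic(nn.Module):
    """Wasserstein critic: outputs an unbounded real-valued score (no sigmoid)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch * 2, 1),
        )

    def forward(self, x):
        return self.net(x)

def wgan_step(G, D, opt_G, opt_D, defect_free, real_defect, clip=0.01):
    """One training step with the Wasserstein loss (weight-clipping variant)."""
    # Critic: maximize D(real) - D(fake), i.e. minimize D(fake) - D(real).
    fake = G(defect_free).detach()
    d_loss = D(fake).mean() - D(real_defect).mean()
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    for p in D.parameters():          # crude Lipschitz constraint via clipping
        p.data.clamp_(-clip, clip)

    # Generator: maximize D(G(x)), i.e. minimize -D(G(x)).
    g_loss = -D(G(defect_free)).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

Because the critic score is unbounded and the Wasserstein loss stays informative even when the two distributions barely overlap, the gradients passed to the generator do not vanish the way they can with the standard cross-entropy GAN objective.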
