Abstract

When deep learning is applied to intelligent textile defect detection, insufficient training data may lead to low accuracy and poor adaptability of the trained model across varying defect types. To address this problem, an enhanced generative adversarial network is proposed for data augmentation and improved fabric defect detection. First, the dataset is preprocessed to generate defect localization maps, which are combined with non-defective fabric images and fed into the network for training; this helps the network extract defect features more effectively. In addition, a Double U-Net architecture is used to improve the fusion of defects and fabric textures. Next, random noise and a multi-head attention mechanism are introduced to improve the model's generalization ability and to enhance the realism and diversity of the generated images. Finally, the newly generated defect images are merged with the original defect data to achieve data augmentation. Comparison experiments were performed with the YOLOv3 object detection model on the training data before and after augmentation. The experimental results show a significant accuracy improvement for five defect types (float, line, knot, hole, and stain), rising from 41%, 44%, 38%, 42%, and 41% to 78%, 76%, 72%, 67%, and 64%, respectively.
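The conditioning step described above (pairing a defect localization map with a non-defective fabric image as network input) and the final dataset merge can be sketched as follows. This is a minimal illustration with NumPy; the function names, array shapes, and channel-concatenation choice are assumptions for clarity, not the paper's actual implementation.

```python
import numpy as np

def make_generator_input(clean_img, defect_map):
    """Concatenate a non-defective fabric image (H, W, 3) with a
    single-channel defect localization map (H, W) along the channel
    axis, giving an (H, W, 4) conditioning input for the GAN.
    (Shapes and layout are illustrative assumptions.)"""
    if defect_map.ndim == 2:
        defect_map = defect_map[..., np.newaxis]
    return np.concatenate([clean_img, defect_map], axis=-1)

def augment_dataset(original_defects, generated_defects):
    """Merge newly generated defect images with the original defect
    data to form the enlarged training set."""
    return list(original_defects) + list(generated_defects)

# Example: one 64x64 clean patch plus a binary map marking a defect region.
clean = np.zeros((64, 64, 3), dtype=np.float32)
mask = np.zeros((64, 64), dtype=np.float32)
mask[20:30, 20:30] = 1.0  # localized defect area
x = make_generator_input(clean, mask)
print(x.shape)  # (64, 64, 4)
```

In practice the concatenated tensor would be fed to the generator, and `augment_dataset` would combine the generator's outputs with the original defect images before training the YOLOv3 detector.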
