Abstract

The detection and localization of yarn-dyed fabric defects is a crucial and challenging problem in real production scenarios. Recently, unsupervised fabric defect detection methods based on convolutional neural networks have attracted increasing attention. However, convolutional neural networks often fail to model the global receptive field of an image, which limits the defect detection ability of the model. In this article, we propose a U-shaped Swin Transformer network with Quadtree attention for unsupervised yarn-dyed fabric defect detection. The Swin Transformer blocks use local window attention to learn features effectively, while the U-shaped network performs pixel-level reconstruction of the image. Quadtree attention captures the global features of the image and models the global receptive field, leading to better reconstruction of yarn-dyed fabric images. An improved Euclidean residual between the input and its reconstruction enhances the detection of subtle defects and yields the final detection result. The proposed method avoids the difficulty of collecting a large number of defective samples and of manual labeling. Our method obtains 51.34% F1 and 38.30% intersection over union on the YDFID-1 dataset. Experimental results show that the proposed method achieves higher accuracy in fabric defect detection and localization than other methods.
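As a rough illustration of the reconstruction-residual idea described above, the sketch below computes a per-pixel Euclidean residual between an input image and its reconstruction and thresholds it into a binary defect mask. The function name, normalization, and `threshold` value are hypothetical; the paper's improved Euclidean residual is not reproduced here.

```python
import numpy as np

def defect_mask(image, reconstruction, threshold=0.5):
    """Binary defect mask from a reconstruction residual.

    image, reconstruction: float arrays of shape (H, W, C) in [0, 1].
    threshold: hypothetical cutoff on the normalized residual map.
    """
    # Per-pixel Euclidean distance across channels between input and reconstruction.
    residual = np.sqrt(((image - reconstruction) ** 2).sum(axis=-1))
    # Normalize the residual map to [0, 1] before thresholding.
    residual = (residual - residual.min()) / (residual.max() - residual.min() + 1e-8)
    # Pixels with a large residual are flagged as defective.
    return residual > threshold
```

In an unsupervised setting like this one, the reconstruction network is trained only on defect-free samples, so defective regions reconstruct poorly and stand out in the residual map.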
