Abstract

Deep learning has shown promise in textile defect detection, but its reliance on large, high-quality labeled datasets poses challenges in real-world industrial applications. This study presents an unsupervised defect detection framework that detects diverse texture defects using only a limited number of defect-free texture samples. The framework integrates texture and semantic information through a bilateral-branch network architecture (TSUBB-Net). Specifically, TSUBB-Net employs a weighted centering loss to cluster complex texture units and a channel attention mechanism to emphasize semantic information within defect regions. It further fuses contextual semantic information to achieve precise defect localization. This fusion of texture and semantic information enables the representation of complex texture structures and mitigates the impact of image acquisition quality on defect recognition. To evaluate the proposed method, we build a dedicated textile defect image segmentation dataset, which serves as a benchmark for textile defect detection. Experimental results demonstrate that TSUBB-Net surpasses state-of-the-art methods in textile defect detection. The proposed framework holds significant potential for practical applications in the textile industry, improving defect detection capabilities.
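
The abstract names two key components, a weighted centering loss for clustering texture units and a channel attention mechanism for emphasizing semantic information. The sketch below illustrates plausible forms of both, assuming a squeeze-and-excitation style attention block and a center-loss formulation with learnable per-cluster weights; the class names, hyperparameters, and exact formulation are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: re-weights feature
    channels so that semantically informative channels are emphasized."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        w = x.mean(dim=(2, 3))                       # global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # per-channel weights
        return x * w                                 # channel-wise re-weighting


class WeightedCenterLoss(nn.Module):
    """Pulls feature vectors of defect-free texture units toward learned
    cluster centers, with a learnable positive weight per cluster."""

    def __init__(self, num_clusters: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_clusters, feat_dim))
        self.weights = nn.Parameter(torch.ones(num_clusters))

    def forward(self, feats: torch.Tensor, cluster_ids: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim); cluster_ids: (batch,) integer cluster labels
        centers = self.centers[cluster_ids]
        w = F.softplus(self.weights)[cluster_ids]    # keep weights positive
        return (w * (feats - centers).pow(2).sum(dim=1)).mean()


if __name__ == "__main__":
    attn = ChannelAttention(channels=64)
    feat_map = torch.randn(2, 64, 32, 32)
    print(attn(feat_map).shape)                      # torch.Size([2, 64, 32, 32])

    loss_fn = WeightedCenterLoss(num_clusters=8, feat_dim=128)
    feats = torch.randn(2, 128)
    ids = torch.randint(0, 8, (2,))
    print(loss_fn(feats, ids).item())
```

In this reading, anomaly-free texture units cluster tightly around their centers during training, so at test time a defective region yields features far from all centers, which supports localization when combined with the fused contextual semantic information.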
