Abstract

Fabric defect detection is a key step in the textile manufacturing industry. Traditional saliency detection models mostly rely on hand-crafted features to capture local details and global context. However, these methods ignore the associations among contextual features, which limits their ability to detect salient objects in complex scenes. In this paper, a deep saliency detection model is proposed that incorporates a self-attention mechanism into a convolutional neural network for fabric defect detection. First, a fully convolutional network is designed to produce multi-scale feature maps that capture rich contextual features of the fabric image. Then, a self-attention module is applied to the side outputs of the backbone network to model the dependencies among multi-level features, improving the representational power of the extracted features. Finally, the multi-level saliency maps produced by the self-attention modules are fused through a short-connection structure to generate a detail-enriched saliency map. Experiments demonstrate that the proposed method outperforms state-of-the-art approaches when defects are blurred or their shapes are complex.
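The self-attention refinement of a side-output feature map described above can be illustrated with a minimal sketch. This is not the authors' implementation: the use of PyTorch, the `SelfAttention2d` module name, the channel sizes, and the reduction ratio are assumptions made for illustration only; the sketch shows the general non-local style attention over spatial positions that such a module typically performs.

```python
# Minimal sketch (not the authors' code) of a self-attention block applied to
# one side-output feature map of a fully convolutional backbone.
# Assumptions: PyTorch, illustrative channel sizes, 1x1-conv query/key/value.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a 2D feature map."""

    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 8, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW) pairwise dependencies
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


# Hypothetical usage: refine a side-output feature map before the multi-level
# saliency maps are fused through short connections.
feat = torch.randn(1, 64, 32, 32)        # example side-output features
refined = SelfAttention2d(64)(feat)
print(refined.shape)                      # torch.Size([1, 64, 32, 32])
```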
