Abstract

Fabric defect detection plays an irreplaceable role in quality control for the textile manufacturing industry, but it remains a challenging task due to the diversity and complexity of defects and environmental factors. Visual saliency models that imitate the human visual system can quickly locate defect regions within complex textured backgrounds. However, most visual saliency-based methods still suffer from incomplete predictions owing to the variability of fabric defects and their low contrast with the background. In this paper, we develop a context-aware attention cascaded feedback network for fabric defect detection to achieve more accurate predictions, in which a parallel context extractor is designed to characterize multi-scale contextual information. Moreover, a top-down attention cascaded feedback module is devised to adaptively select important multi-scale complementary information and transmit it to the adjacent shallower layer, compensating for the inconsistency of information among layers and enabling accurate localization. Finally, a multi-level loss function is applied to guide our model to generate more accurate predictions by optimizing multiple side-output predictions. Experimental results on two fabric datasets, evaluated under six widely used metrics, demonstrate that our proposed framework remarkably outperforms state-of-the-art models.
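The multi-level loss described above supervises every side-output prediction against the ground-truth defect mask and sums the per-level terms. A minimal NumPy sketch is given below; the function names, the choice of pixel-wise binary cross-entropy per level, and the uniform level weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Pixel-wise binary cross-entropy between a predicted saliency map
    # and the ground-truth defect mask (both arrays of values in [0, 1]).
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))

def multi_level_loss(side_outputs, target, weights=None):
    # Sum the supervision signal over every side-output prediction.
    # Uniform weights are an assumption; the levels may be weighted
    # differently in practice.
    if weights is None:
        weights = [1.0] * len(side_outputs)
    return sum(w * bce(p, target) for w, p in zip(weights, side_outputs))
```

A perfect prediction at every level drives the loss toward zero, while uncertain (all-0.5) side outputs each contribute roughly log 2 per level.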
