Automatic fabric defect detection systems improve the quality of textile production across the industry. To make such systems accessible to smaller businesses, one option is to deploy them on low-cost chips with limited memory, i.e., on resource-constrained hardware platforms. The fabric defect detection algorithm must therefore achieve high detection accuracy at a low computational cost. To this end, we propose a wide-and-light network structure based on Faster R-CNN for detecting common fabric defects. We strengthen the feature extraction network by designing a dilated convolution module, in which multi-scale convolution kernels adapt to defects of different sizes. Because dilated convolutions enlarge the receptive field without increasing the number of parameters, we replace a subset of ordinary convolutions with dilated convolutions to learn target features, and we apply convolution kernel decomposition and bottleneck methods to simplify the feature extraction network. High-level semantic features are then fused with low-level detail features via skip connections to obtain multi-scale fusion features. Finally, a series of anchor boxes of different sizes is designed to suit multi-scale fabric defect detection. Experiments show that, compared with mainstream object detection algorithms, the proposed algorithm improves fabric defect detection accuracy while reducing model size.
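The key property the abstract relies on, that a dilated convolution widens the receptive field while keeping the parameter count fixed, can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation; the 1-D convolution and the helper names below are illustrative assumptions, showing only that the same three weights cover a span of five samples when the dilation rate is 2.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilated kernel.

    The same len(kernel) weights are reused, but the taps are spaced
    `dilation` samples apart, so the receptive field grows without
    adding any parameters.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

def receptive_field(kernel_size, dilation):
    """Effective receptive field of a single dilated convolution layer."""
    return (kernel_size - 1) * dilation + 1

x = list(range(10))           # toy 1-D "feature map"
w = [1.0, 1.0, 1.0]           # 3 weights -> 3 parameters in both cases
print(receptive_field(3, 1))  # ordinary conv: field = 3
print(receptive_field(3, 2))  # dilated (rate 2): field = 5, still 3 weights
print(dilated_conv1d(x, w, dilation=2))
```

In a 2-D network the same arithmetic applies per axis, which is why replacing ordinary convolutions with dilated ones enlarges the receptive field at no cost in model size.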