In fabric defect detection, algorithm development has been hindered by the poor quality and limited quantity of open-source datasets. Traditional data augmentation methods yield only limited improvements in model performance, while generative augmentation methods suffer from difficult generator training, susceptibility to artifacts, and the need for re-labeling. To address these challenges, this paper proposes a blind super-resolution algorithm for fabric defect data augmentation. The model is based on Real-ESRGAN, with its resolution degradation module optimized to better match the degradation process observed in fabric images. A novel loss function, Local Blur Discrimination Loss, is then designed to address local blur and to suppress fabric artifacts during super-resolution. Finally, the experiments include both subjective evaluations of super-resolution quality and objective comparisons of data augmentation performance. The subjective assessments show that the proposed method outperforms the baseline model. Objectively, after augmenting the DAGM2007 dataset with the proposed model, the detection model's precision (P) increased by 7.4%, recall (R) by 1.0%, and mean average precision (mAP) by 2.5%, surpassing commonly used traditional vision-based data augmentation algorithms.
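The abstract does not specify the form of the Local Blur Discrimination Loss; the paper itself should be consulted for the actual formulation. Purely as an illustrative sketch of the general idea, one could penalize patches where the super-resolved output is locally blurrier than the high-resolution target, using a Laplacian response as a simple sharpness cue. All names and formulas below are assumptions, not the authors' method.

```python
# Hypothetical sketch of a patch-wise "local blur" penalty.
# Assumption: local sharpness is measured by the mean absolute Laplacian
# response per patch; patches where the SR image is less sharp than the
# HR target contribute to the loss. The paper's actual loss may differ.
import numpy as np

def laplacian_response(img):
    """Per-pixel absolute Laplacian magnitude as a local sharpness cue."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(out)

def local_blur_loss(sr, hr, patch=8):
    """Average penalty over patches where SR is blurrier than HR."""
    h = sr.shape[0] // patch * patch
    w = sr.shape[1] // patch * patch
    loss, n = 0.0, 0
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            s = laplacian_response(sr[y:y + patch, x:x + patch]).mean()
            t = laplacian_response(hr[y:y + patch, x:x + patch]).mean()
            loss += max(0.0, t - s)  # only under-sharp patches are penalized
            n += 1
    return loss / max(n, 1)
```

In this sketch the penalty is zero when the output matches the target's local sharpness everywhere, and grows with the number and severity of blurred patches, which mirrors the stated goal of discriminating and suppressing local blur.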