Abstract
Fabric defect detection is crucial for quality control in fabric manufacturing but remains challenging due to the multi-scale characteristics of defects and their tendency to blend into the fabric background. To address this, we propose an efficient fabric defect detection model, DGHR-YOLO. First, a deformable convolution block (DCB) is introduced into the backbone network, leveraging its dynamic receptive field to capture defect morphology and enhance feature extraction. Second, we propose the GS-SPPF module, designed to mitigate semantic information loss, optimize feature fusion, and improve both accuracy and inference speed. Third, a lightweight High-level Screening Feature Pyramid Network (HS-FPN) is introduced, enabling effective multi-scale feature fusion at low complexity. Finally, a one-shot aggregation module based on channel shuffle and re-parameterized convolution is introduced to enhance feature interaction across scales. Experimental results on the Tianchi textile dataset demonstrate that DGHR-YOLO achieves mAP@0.5 and mAP@0.5:0.95 scores of 85.8% and 72.7%, improvements of 2.7% and 3.8% over YOLOv8m respectively, while maintaining a low parameter count and computational complexity.
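To make the deformable convolution idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a block in the spirit of the DCB described above: a small offset branch predicts per-location sampling offsets so the receptive field can adapt to defect morphology. The block name, layer layout, and activation choice are assumptions for illustration; only the use of deformable convolution in the backbone comes from the abstract.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DCB(nn.Module):
    """Hypothetical deformable convolution block: offset branch + DeformConv2d + BN + SiLU."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, stride: int = 1):
        super().__init__()
        pad = k // 2
        # Offset branch predicts 2 offsets (x, y) for each of the k*k kernel sampling points.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, stride=stride, padding=pad)
        self.dconv = DeformConv2d(in_ch, out_ch, k, stride=stride, padding=pad)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset(x)  # dynamic sampling locations -> adaptive receptive field
        return self.act(self.bn(self.dconv(x, offsets)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)      # dummy backbone feature map
    print(DCB(64, 128)(x).shape)        # torch.Size([1, 128, 80, 80])
```

In this sketch the offsets are learned jointly with the convolution weights, which is how the dynamic receptive field adapts to irregular defect shapes; the paper's actual DCB may combine this with additional normalization or residual connections.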