Abstract

This paper explores the use of state-of-the-art latent diffusion models, specifically Stable Diffusion, to generate synthetic images that improve the robustness of visual defect segmentation in manufacturing components. Because real-world defect data is scarce and imbalanced, synthetic data generation offers a promising way to train deep learning models. We fine-tuned Stable Diffusion with Low-Rank Adaptation (LoRA) on the NEU-seg dataset and evaluated how different ratios of synthetic to real images in the training set affect DeepLabV3+ and FPN segmentation models. Our results show a significant improvement in mean Intersection over Union (mIoU) when the training dataset is augmented with synthetic images: the proposed approach improved defect-segmentation mIoU by 5.95% and 6.85% for the two models, respectively, over training on the original dataset alone. This study highlights the potential of diffusion models to enhance the quality and diversity of training data for industrial defect detection, leading to more accurate and reliable segmentation results.
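The abstract's headline numbers are mIoU gains. As a point of reference, the following is a minimal sketch of how per-class IoU and its mean are typically computed from flattened prediction and ground-truth masks; the toy label lists and the class-skipping rule (ignore classes absent from both masks) are assumptions here, not details taken from the paper.

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union, averaged over classes that appear
    in either the prediction or the ground truth.

    pred, target: flat lists of integer class labels (hypothetical toy
    inputs; the paper evaluates mIoU on NEU-seg masks predicted by
    DeepLabV3+ and FPN).
    """
    ious = []
    for c in range(num_classes):
        # Pixels where both masks agree on class c.
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        # Pixels where either mask contains class c.
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

On a toy pair of 4-pixel masks, `mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)` averages an IoU of 1/2 for class 0 and 2/3 for class 1, giving 7/12.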
