Abstract

Deep learning-based methods have demonstrated remarkable success in object detection tasks when abundant training data are available. In the industrial domain, however, acquiring sufficient training data remains a challenge. Many synthetic datasets are currently created with 3D modeling software, which can simulate real-world scenes and objects but often falls short of full accuracy and realism. In this paper, we propose a synthetic data generation framework for industrial object detection based on image-to-image translation. To address the low image quality that can arise during translation, we replace the original feature extraction module with a Residual Dense Block (RDB) module and employ the resulting RDB-CycleGAN network to transform CAD models into realistic images. We further introduce an SSIM loss to strengthen the constraints on the generator and quantitatively analyze the synthetic data produced by the improved RDB-CycleGAN. Evaluation of the proposed method shows that the generated synthetic data effectively enhance the performance of object detection algorithms on real images; compared with using CAD models directly, the synthetic data adapt better to real-world scenarios and improve the model's generalization ability.
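
The abstract names two components, a Residual Dense Block and an SSIM term on the generator loss; the sketch below (not the authors' code) illustrates what these typically look like in PyTorch. Channel counts, growth rate, window size, and the loss weight `lambda_ssim` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    """RDB: densely connected convolutions, local feature fusion, local residual."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Conv2d(in_ch, growth, kernel_size=3, padding=1))
            in_ch += growth
        # 1x1 convolution fuses the concatenated features back to `channels`
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for conv in self.layers:
            out = F.relu(conv(torch.cat(features, dim=1)))
            features.append(out)
        # local residual learning: fused features added back to the block input
        return x + self.fusion(torch.cat(features, dim=1))

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2, window=11):
    """1 - mean SSIM, computed with a simple uniform window (illustrative)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim.mean()

# Hypothetical use inside a CycleGAN training step: the SSIM term constrains the
# reconstructed (cycled) image to stay structurally close to the real input.
# loss_G = loss_gan + lambda_cyc * loss_cycle + lambda_ssim * ssim_loss(rec_A, real_A)
```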
