Ultrasonic testing is a widely used non-destructive testing technique for precision forgings. However, assessing defects in ultrasonic B-scan images by human judgment is prone to errors, misses, and inefficiency. To address these challenges, we propose a deep learning method to automate the evaluation of such images. We started by creating a dataset of 8000 images, each measuring 224 × 224 pixels, cropped from ultrasonic B-scan images of 7 specimens containing hole and crack defects of different sizes and at different locations. We then benchmarked state-of-the-art deep learning models on the dataset and identified YOLOv5s as the best-performing baseline model for our study. To ease the deployment of deep learning models and to address the issue of small defects being easily confused with the background in ultrasonic B-scan images, we made lightweight improvements to the model. Additionally, we enhanced the quality of the data labels through data cleaning. Our experiments show that our method achieved a precision of 97.8%, a recall of 98.1%, an mAP@0.5 of 99.0%, and an mAP@0.5:0.95 of 67.6%, at 74.5 frames per second (FPS). Furthermore, the number of model parameters was reduced by 43.2% while maintaining high detection accuracy. Overall, our proposed method offers a significant improvement over the original model, making it a more reliable and efficient tool for automated defect assessment in ultrasonic B-scan images.
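
As an illustration of the pipeline summarized above, the following minimal Python sketch tiles a B-scan image into 224 × 224 patches and runs a YOLOv5s detector on each patch. This is not the authors' implementation: the weight file name `bscan_yolov5s.pt`, the image path, the non-overlapping tiling, and the use of `torch.hub` to load custom YOLOv5 weights are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): tile an ultrasonic B-scan image
# into 224 x 224 patches and run a YOLOv5s detector on each patch.
# The weight file and image path below are hypothetical placeholders.
import cv2
import pandas as pd
import torch

PATCH = 224  # patch size matching the dataset described in the abstract

# Load a YOLOv5s model via torch.hub; the custom weight file is an assumption.
model = torch.hub.load("ultralytics/yolov5", "custom", path="bscan_yolov5s.pt")

bscan = cv2.imread("bscan_example.png")  # hypothetical B-scan image (BGR)
h, w = bscan.shape[:2]

detections = []
for y in range(0, h - PATCH + 1, PATCH):          # non-overlapping tiling
    for x in range(0, w - PATCH + 1, PATCH):
        patch = bscan[y:y + PATCH, x:x + PATCH]
        patch_rgb = cv2.cvtColor(patch, cv2.COLOR_BGR2RGB)
        result = model(patch_rgb)                  # per-patch inference
        boxes = result.pandas().xyxy[0]            # boxes in patch coordinates
        boxes[["xmin", "xmax"]] += x               # shift back to full-image coords
        boxes[["ymin", "ymax"]] += y
        detections.append(boxes)

all_boxes = pd.concat(detections, ignore_index=True)
print(all_boxes[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```

A patch-wise scheme like this keeps the detector input at the 224 × 224 resolution used for training; in practice one might add overlapping tiles or merge boxes across patch boundaries, which the sketch omits for brevity.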