Remanufacturing of mechanical parts has recently gained considerable attention owing to the rapid development of green technologies and the push for sustainability. Notable efforts have been made to automate the inspection step of the remanufacturing process using artificial intelligence. In this step, end-of-life (EOL) parts are visually inspected to locate defective regions for restoration, an operation that corresponds to object detection, a typical computer vision task. Many researchers have adopted well-known deep-learning models to detect such damage. A common technique in object detection is transfer learning, in which general-purpose object detectors are adapted to specific tasks such as metal surface defect detection. One open-source model, YOLOv7, is known for real-time object detection, high accuracy, and optimal scaling. In this work, the behavior of YOLOv7 is investigated on several public metal surface defect datasets, including NEU-DET, NRSD, and KolektorSDD2. A case study is also included to validate the model’s application in an industrial setting. The tiny variant of YOLOv7 achieved the best performance on the NEU-DET dataset, with 73.9% mAP (mean average precision) at 103 FPS (frames per second) during inference. On the NRSD dataset, the base variant reached 88.5% in object detection and semantic segmentation inference. In addition, the model achieved 65% accuracy when tested on the KolektorSDD2 dataset. The results are analyzed and compared with existing defect detection models, and the model’s segmentation performance is also reported.