Under-extrusion poses a common challenge in filament 3D printing, necessitating precise adjustment of printing parameters to achieve optimal resolution. Swift identification of this flaw is crucial for implementing timely corrective measures. In response, we present a novel machine learning approach for detecting anomalies in 3D printing, specifically in the fused filament fabrication (FFF) process. Our framework utilizes “You Only Look Once” (YOLO), a real-time object detection system, and “Visual Geometry Group-16” (VGG-16), a Convolutional Neural Network (CNN) model for image recognition, to accurately identify and localize under-extrusion events. Initially, models such as VGG-16, VGG-19, and ResNet-50 were trained without YOLO to establish baseline accuracies. Subsequently, an automated image pre-processing phase employs YOLO to detect the nozzle head, enabling cropping around this region of interest before the models are retrained. Incorporating nozzle head detection significantly improved the accuracy of all models, with the combination of YOLO and VGG-16 yielding the most substantial gain, boosting detection accuracy to 97%. Moreover, the Gradient-weighted Class Activation Mapping (Grad-CAM) technique, a CNN-based visualization method, is employed to effectively highlight the predicted regions in under-extrusion scenarios. While our method significantly advances the automatic detection of printing anomalies, it primarily serves as a diagnostic tool, signaling the need for intervention. Rigorous testing on images of varying complexity confirms the model’s robustness, evaluated using comprehensive metrics such as precision, recall, and F1 score.
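The abstract's pre-processing stage, in which a detected nozzle bounding box is used to crop the region of interest before classifier retraining, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the `(x1, y1, x2, y2)` box format, and the padding value are assumptions; in practice the box would come from a YOLO detector and the crop would be resized to the classifier's input size (e.g. 224×224 for VGG-16).

```python
import numpy as np

def crop_around_nozzle(img, box, pad=20):
    """Crop an image around a detected nozzle bounding box.

    img : H x W x C array (a camera frame of the print bed)
    box : (x1, y1, x2, y2) pixel coordinates, as a detector might return
    pad : extra margin in pixels so surrounding extrusion stays visible
    """
    h, w = img.shape[:2]
    x1, y1, x2, y2 = box
    # Expand the box by `pad` pixels, clamped to the image bounds.
    x1 = max(0, x1 - pad)
    y1 = max(0, y1 - pad)
    x2 = min(w, x2 + pad)
    y2 = min(h, y2 + pad)
    return img[y1:y2, x1:x2]

# Example: a 640x480 frame with a hypothetical detection near the center.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_around_nozzle(frame, (300, 200, 360, 260), pad=30)
print(roi.shape)  # (120, 120, 3)
```

Cropping before classification discards background clutter (bed, gantry, lighting artifacts), which is consistent with the reported accuracy improvement once the models see only the nozzle region.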