Early detection of defects, such as keyhole pores and cracks, is crucial in laser-directed energy deposition (L-DED) additive manufacturing (AM) to prevent build failures. However, the complex melt pool behaviour cannot be adequately captured by conventional single-modal process monitoring approaches. This study introduces a multisensor fusion-based digital twin (MFDT) for localized quality prediction in the robotic L-DED process. The data used in multisensor fusion include features extracted from a coaxial melt pool vision camera, a microphone, and an off-axis short-wavelength infrared thermal camera. The key novelty of this work is a spatiotemporal data fusion method that synchronizes multisensor features with real-time robot motion data to achieve localized quality prediction. Optical microscope (OM) images of the printed part are used to locate defect-free and defective regions (i.e., cracks and keyhole pores), which serve as ground-truth labels for training supervised machine learning (ML) models for quality prediction. The trained ML model is then used to generate a virtual quality map that registers quality prediction outcomes within the 3D volume of the printed part, thus eliminating the need for physical inspection by destructive methods. Experiments show that the virtual quality map closely matches the actual quality observed by OM. Compared to traditional single-sensor-based quality prediction, the MFDT achieves a significantly higher quality prediction accuracy (96%), a higher ROC-AUC score (99%), and a lower false alarm rate (4.4%). As a result, the MFDT is a more reliable method for defect prediction. The proposed MFDT also lays the groundwork for our future development of a self-adaptive hybrid processing strategy that combines machining with AM for defect removal and quality improvement.
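The core of the spatiotemporal fusion step described above is aligning asynchronously sampled sensor features (vision, acoustic, thermal) to the robot's motion log so that each deposition location receives one fused feature vector. The paper does not publish its implementation; the following is a minimal illustrative sketch, assuming nearest-timestamp alignment and hypothetical names (`fuse`, `robot_log`, `sensor_streams`).

```python
# Hypothetical sketch of spatiotemporal multisensor fusion:
# features from sensors sampled at different rates are matched to
# robot motion timestamps by nearest-neighbour lookup, yielding one
# fused feature vector per deposition position. All names illustrative.
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the entry in a sorted timestamp list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbour is closer in time.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def fuse(robot_log, sensor_streams):
    """robot_log: list of (t, (x, y, z)) robot poses.
    sensor_streams: dict name -> (timestamps, feature_vectors).
    Returns a list of ((x, y, z), fused_feature_vector) pairs that can
    serve as localized training samples for a quality classifier."""
    fused = []
    for t, xyz in robot_log:
        vec = []
        for ts, feats in sensor_streams.values():
            vec.extend(feats[nearest(ts, t)])
        fused.append((xyz, vec))
    return fused
```

Each fused sample, paired with an OM-derived defect/defect-free label for its location, would then feed a supervised classifier, and the classifier's per-location predictions registered back to the `(x, y, z)` coordinates form the virtual quality map.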