Accurately identifying penetration status is crucial for ensuring high-quality welding. However, relying on a single sensor to monitor the welding status falls short in accuracy. Thus, a novel monitoring method that fuses sound and image signals is proposed for assessing the laser welding penetration status. First, a microphone and a high-speed camera are employed to collect sound and image signals during welding. Second, to overcome noise in the sound signal, a denoising method is proposed that uses Grey Wolf Optimization (GWO) to find the optimal parameters of Variational Mode Decomposition (VMD). Time- and frequency-domain features are extracted from the denoised sound, and Local Binary Pattern (LBP) features are extracted from the images. These features constitute a high-dimensional dataset, whose dimensionality is reduced using Principal Component Analysis (PCA). Finally, a back-propagation neural network (BPNN) is constructed for fusion monitoring. The results demonstrate that fusion monitoring achieves higher accuracy (97.9%) than using sound (79.9%) or image (92.3%) features independently. Compared with other methods, namely K-Nearest Neighbor (KNN), Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree, BPNN fusion monitoring achieves higher accuracy, with increases of 18.09%, 12.79%, 6.53%, and 25.84%, respectively. The proposed method helps to promote the development of welding intelligence.
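The feature-fusion stage described above (concatenating sound and image features, reducing dimensionality with PCA, then classifying with a BPNN) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the feature counts, network size, learning rate, and three-class labeling (e.g. under-, full-, and over-penetration) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the extracted features; the real features would be
# time/frequency statistics of the GWO-VMD-denoised sound and LBP histograms.
n_samples = 300
X_sound = rng.normal(size=(n_samples, 60))    # assumed 60 sound features
X_image = rng.normal(size=(n_samples, 59))    # assumed 59 LBP image features
y = rng.integers(0, 3, size=n_samples)        # assumed 3 penetration states

# Feature-level fusion: concatenate, center, then PCA via SVD
X = np.hstack([X_sound, X_image])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:10].T                        # keep 10 principal components

def one_hot(labels, k):
    out = np.zeros((labels.size, k))
    out[np.arange(labels.size), labels] = 1.0
    return out

# Minimal one-hidden-layer BPNN trained by backpropagation (plain gradient descent)
W1 = rng.normal(scale=0.1, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3));  b2 = np.zeros(3)
Y = one_hot(y, 3)
for _ in range(200):
    H = np.tanh(X_pca @ W1 + b1)                      # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                 # softmax probabilities
    dZ = (P - Y) / n_samples                          # softmax cross-entropy grad
    dW2 = H.T @ dZ; db2 = dZ.sum(axis=0)
    dH = (dZ @ W2.T) * (1 - H**2)                     # backprop through tanh
    dW1 = X_pca.T @ dH; db1 = dH.sum(axis=0)
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1

pred = P.argmax(axis=1)
acc = (pred == y).mean()
print(f"training accuracy on synthetic data: {acc:.2f}")
```

In practice the class labels would come from labeled weld cross-sections, and the network would be evaluated on held-out samples rather than training data; the sketch only shows how concatenated multi-sensor features flow through PCA into a back-propagation classifier.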