Abstract

Weld bead geometry visually represents the result of the welding process. However, a consistent weld bead is not always guaranteed because of the instability of the welding process. In this study, we present a deep learning-based monitoring system that simultaneously predicts both the top and bottom bead widths during multi-mode fiber laser welding of aluminum alloy 1050P-H16. From the predicted top and bottom bead widths, rich information about the welding process can be obtained. Our deep learning model was constructed on the VoVNet27-slim architecture and was successfully trained using weld pool images obtained coaxially by a small optical camera. We were able to obtain very clean images of the aluminum weld pool, which provided plentiful information about the weld pool and the resulting top and bottom bead widths. We used either one or two weld pool images as input to study how an additional weld pool image can enhance prediction accuracy. The top bead width was accurately predicted by both the one- and two-image models. However, the two-image model showed a clear improvement in the prediction of the bottom bead width, because the bottom bead width fluctuates more widely and cannot be directly seen in the top-side weld pool image. The optimal separation distance between the two input images was found to be −0.1 mm, which supplied the model with additional information about the past state of the weld pool and provided a denoising effect. Monitoring changes in both the top and bottom bead widths provides rich information about the welding process, and we believe that the presented deep learning-based approach can serve as an effective monitoring tool.
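
As a rough illustration of the approach described above, the sketch below shows how two coaxial weld pool images could be stacked as input channels and regressed to the top and bottom bead widths. This is a minimal sketch assuming PyTorch; the small convolutional backbone, layer sizes, image resolution, and the BeadWidthRegressor name are placeholders for the paper's VoVNet27-slim model and actual data, which are not specified here.

```python
# Minimal sketch of a two-image-in, two-width-out regression model (assumption: PyTorch).
# The simple convolutional backbone below stands in for the paper's VoVNet27-slim.
import torch
import torch.nn as nn


class BeadWidthRegressor(nn.Module):
    def __init__(self, in_images: int = 2):
        super().__init__()
        # Placeholder backbone: a small stack of strided conv blocks (not VoVNet27-slim).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_images, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: predicts the top and bottom bead widths simultaneously.
        self.head = nn.Linear(128, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_images, H, W) grayscale weld pool images
        features = self.backbone(x).flatten(1)
        return self.head(features)  # (batch, 2): [top_width, bottom_width]


if __name__ == "__main__":
    model = BeadWidthRegressor(in_images=2)
    # Hypothetical batch: the current frame plus the frame captured slightly earlier
    # along the weld line (the paper's optimal separation was -0.1 mm); 128x128 pixels.
    images = torch.randn(4, 2, 128, 128)
    widths = model(images)
    print(widths.shape)  # torch.Size([4, 2])
```

In this sketch the one-image variant would simply set in_images to 1; the two-channel input is one straightforward way to feed the model the extra past-frame context that the abstract credits with improving the bottom bead width prediction.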
