Abstract

Weld penetration determines the integrity of a weld and must be controlled in automated welding. Spurred by the rapid development of neural networks, research has applied the convolutional neural network (CNN) as a deep-learning model to automatically extract weld pool features from the weld pool image. However, for deep learning to be effective, the raw information must contain features that correlate with the weld penetration. High dynamic range (HDR) cameras provide an effective means to image the weld pool scene without it being overshadowed by the arc, so that the rich information from the weld pool may be preserved. Unfortunately, limited studies have extracted this rich information from HDR images and used the extracted relevant information/features to predict what occurs underneath the work-piece, in particular when the weld pool is subject to dynamic change, as during its feedback control. In this work, an HDR camera is used to capture the weld pool image from the top side. What occurs at the same time underneath the work-piece is captured by another camera aimed at the back-side surface of the weld pool, forming the ground truth for training. A CNN model is proposed to extract the relevant information from the rich information source (the HDR top-side image) and map it to the label representing what occurs underneath the work-piece. To train the network, a series of experiments was conducted with the welding current and speed changed randomly, generating various weld pool images and back-side bead widths/images in order to ensure the reliability and robustness of the trained network in a varying environment. Analysis of the results verifies that the well-trained CNN improves the prediction of the back-side bead width.
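To make the mapping concrete, the sketch below shows the kind of image-to-scalar regression the abstract describes: a convolutional feature extractor applied to a top-side weld pool image, followed by a regression head that outputs a single predicted back-side bead width. The layer sizes, kernel counts, and image resolution here are illustrative assumptions, not the paper's actual architecture, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def predict_bead_width(img, kernels, w, b):
    """Conv -> ReLU -> global average pooling -> linear regression head."""
    feat = np.maximum(conv2d_valid(img, kernels), 0.0)  # ReLU feature maps
    pooled = feat.mean(axis=(1, 2))                     # one value per map
    return float(pooled @ w + b)                        # scalar bead width

# Hypothetical 32x32 HDR intensity patch and randomly initialised weights.
image = rng.random((32, 32))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
w = rng.standard_normal(4)
b = 0.0

width = predict_bead_width(image, kernels, w, b)
```

In the paper's setting, the regression target for training would come from the second camera imaging the back-side surface, so the network learns to infer the back-side bead width from top-side appearance alone.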
