Abstract

Resistance spot welding (RSW) is widely used in the manufacturing of automobiles, aircraft, high-speed trains and other equipment. The appearance quality of a weld not only affects the appearance of the product but also, to a great extent, reflects internal defects in the welding spot and the health status of the welding equipment. A few researchers have tried to use deep learning algorithms to inspect the appearance quality of welding spots; however, the relationship between defect types and welding spot positions is ignored, and the subtle visual differences among welding spots are not fully exploited. To this end, a fine-grained flexible graph convolution network (FFGCN), which combines natural language processing with computer vision, is proposed in this paper for the vision inspection of resistance spot welds. Specifically, prior knowledge of the relationship between weld appearance quality and weld position is mapped into a point-wise space by a knowledge graph, so that features in this space can be mined by a flexible graph convolution network (FGCN). In the FGCN, a multi-head attention mechanism adaptively updates the probabilistic matrix to generate multiple subgraphs, which expands the spatial dimensions and enriches the feature information. Meanwhile, the optical features of weld images are extracted by a fine-grained network, in which dense atrous convolution enlarges the receptive field of the model while preserving the pixel resolution, and bilinear attention convolution captures the subtle visual differences between welding spots. Finally, the point-wise features and the visual features are combined by a dot product to classify weld appearance. A six-class experiment on the vision inspection of resistance spot welds from engineering practice shows that the proposed FFGCN performs outstandingly on the inspection of weld appearance quality: it converges faster, is more robust, and reaches an accuracy of 97.5%, higher than commonly used visual classification algorithms.
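To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the three ingredients named in the abstract: a graph branch whose adjacency is generated per attention head (multiple "subgraphs" over position/label nodes), a visual branch built from atrous (dilated) convolutions with a simple bilinear attention pooling, and a dot-product fusion of the two feature sets. All module names, dimensions, and hyperparameters here are hypothetical illustrations under these assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the FFGCN structure described in the abstract.
import torch
import torch.nn as nn


class MultiHeadGraphBranch(nn.Module):
    """Each attention head produces its own soft adjacency matrix
    (a 'subgraph') over the C position/label nodes, then propagates
    the node embeddings through it."""
    def __init__(self, in_dim, out_dim, heads=4):
        super().__init__()
        self.heads = heads
        self.q = nn.Linear(in_dim, in_dim * heads)
        self.k = nn.Linear(in_dim, in_dim * heads)
        self.proj = nn.Linear(in_dim * heads, out_dim)

    def forward(self, node_emb):                      # (C, in_dim)
        C, D = node_emb.shape
        q = self.q(node_emb).view(C, self.heads, D)   # (C, H, D)
        k = self.k(node_emb).view(C, self.heads, D)
        # one probabilistic adjacency per head: (H, C, C)
        adj = torch.softmax(torch.einsum("chd,khd->hck", q, k) / D ** 0.5, dim=-1)
        out = torch.einsum("hck,kd->chd", adj, node_emb).reshape(C, -1)
        return self.proj(out)                         # (C, out_dim)


class VisualBranch(nn.Module):
    """Dilated (atrous) convolutions enlarge the receptive field without
    downsampling; a bilinear-attention pooling aggregates part features."""
    def __init__(self, out_dim, attn_maps=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, attn_maps, 1)       # attention maps
        self.fc = nn.Linear(64 * attn_maps, out_dim)

    def forward(self, img):                           # (B, 3, H, W)
        f = self.backbone(img)                        # (B, 64, H, W)
        a = torch.sigmoid(self.attn(f))               # (B, M, H, W)
        # bilinear pooling: attention-weighted spatial aggregation
        parts = torch.einsum("bmhw,bchw->bmc", a, f).flatten(1)
        return self.fc(parts)                         # (B, out_dim)


class FFGCNSketch(nn.Module):
    def __init__(self, num_classes=6, node_dim=64, feat_dim=128):
        super().__init__()
        # learnable node embeddings standing in for knowledge-graph priors
        self.node_emb = nn.Parameter(torch.randn(num_classes, node_dim))
        self.graph = MultiHeadGraphBranch(node_dim, feat_dim)
        self.visual = VisualBranch(feat_dim)

    def forward(self, img):
        classifiers = self.graph(self.node_emb)       # (C, feat_dim)
        feats = self.visual(img)                      # (B, feat_dim)
        # dot-product fusion of visual features with per-class graph features
        return feats @ classifiers.t()                # (B, C) class scores


if __name__ == "__main__":
    model = FFGCNSketch()
    scores = model(torch.randn(2, 3, 224, 224))
    print(scores.shape)                               # torch.Size([2, 6])
```

The dot-product fusion treats the graph-branch output as a set of class-specific classifiers applied to the image features, which is one common way to couple label-relation priors with a visual backbone; the paper's exact formulation may differ.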
