Abstract

Machine learning has become a convenient way to support decision-making in manufacturing systems and to improve manufacturing efficiency. One task in this domain is recognizing girth weld defects from radiographic (X-ray) images, and incorporating radiographic inspection industry specifications and the experience of specialists into the model design process remains challenging. The emergence of image-text multimodal models shows that natural language can supervise a much wider set of visual concepts; accordingly, this paper summarizes and generalizes the experience of professionals in defect recognition. Domain experience prototypes for different types of defects are established in text form, and a contrastive learning method that matches image and text features is used to recognize girth weld defects. Finally, the proposed method is validated on radiographic inspection data from in-service pipelines, demonstrating that it can recognize defects in accordance with the domain specification and produces better outcomes than common type recognition networks.
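To make the image-text matching idea concrete, the sketch below shows CLIP-style zero-shot defect recognition: each defect type is described by a textual prototype, and an image is assigned to the type whose text embedding best matches its image embedding. This is only an illustration under stated assumptions: it uses an off-the-shelf CLIP checkpoint ("openai/clip-vit-base-patch32") and invented prompt texts, not the paper's actual domain prototypes, fine-tuned model, or data.

```python
# Hypothetical sketch of zero-shot girth weld defect recognition via
# image-text contrastive matching (CLIP-style). The defect prototypes
# below are illustrative placeholders, not the paper's expert prototypes.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed pretrained checkpoint; the paper's own model is not public here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Text-form "domain experience prototypes" (illustrative examples only).
defect_prototypes = [
    "a radiographic image of a weld with a crack, a dark sharp linear indication",
    "a radiographic image of a weld with porosity, small rounded dark spots",
    "a radiographic image of a weld with lack of fusion along the groove face",
    "a radiographic image of a sound girth weld without defects",
]

# Load one radiograph (path is a placeholder).
image = Image.open("girth_weld_radiograph.png").convert("RGB")

# Encode the image and all prototype texts, then compare them in the
# shared embedding space; logits_per_image holds image-text similarities.
inputs = processor(text=defect_prototypes, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

# The predicted defect type is the prototype with the highest similarity.
best = probs.argmax(dim=1).item()
print(f"predicted: {defect_prototypes[best]} (p={probs[0, best]:.3f})")
```

In practice the paper's approach would rely on prototypes written to reflect the radiographic inspection specification and expert experience, and on training or adapting the encoders with in-domain weld images rather than a generic checkpoint.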
