Abstract

Soft sensing of yarn quality is critical for process monitoring and quality control in smart textile manufacturing. However, current methods still suffer from limited accuracy because they rely on fixed, single-source inputs related to fiber properties and process parameters. This study proposes a complementary knowledge-augmented multimodal learning method that autonomously learns complementary knowledge from the assistant modality of yarn appearance images to compensate for the deviation of a model pretrained on the primary modality of fiber and process parameters. First, a pretraining network is established to extract feature representations of the primary modality and produce preliminary estimates of the yarn quality indices. Next, a feature fusion mechanism, composed mainly of a correlation evaluation matrix and a fusion gate, is designed based on expert knowledge to match complementary features as compensation that corrects the preliminary outputs. Experimental results from the workshop demonstrate that the proposed method achieves an accuracy of at least 94.53% on the yarn appearance quality indicators. Compared with current yarn quality estimation methods, the proposed method improves accuracy by at least 2.89%; in particular, accuracy improves by over 6.94% for the soft sensing of yarn neps.
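The fusion-gate idea described above can be illustrated with a toy sketch. Note that this is an assumed, minimal parameterization (a sigmoid gate over concatenated features); the paper's actual correlation evaluation matrix and gate design are not specified in the abstract, and all names, dimensions, and weights here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(primary_feat, assist_feat, W_g, b_g):
    """Fuse primary (fiber/process) features with assistant (image) features.

    A gate g in (0, 1), computed from both modalities, decides per dimension
    how much of the assistant modality's complementary signal is admitted as
    compensation to the primary representation. This is an illustrative
    sketch, not the paper's exact mechanism.
    """
    joint = np.concatenate([primary_feat, assist_feat])
    g = sigmoid(joint @ W_g + b_g)          # per-dimension gate in (0, 1)
    fused = primary_feat + g * assist_feat  # compensated representation
    return fused, g

# Hypothetical dimensions and random weights, for demonstration only.
rng = np.random.default_rng(0)
d = 8
primary = rng.normal(size=d)
assist = rng.normal(size=d)
W_g = rng.normal(scale=0.1, size=(2 * d, d))
b_g = np.zeros(d)

fused, gate = gated_fusion(primary, assist, W_g, b_g)
```

In this sketch, a gate value near 0 leaves the pretrained primary-modality estimate untouched, while a value near 1 admits the full complementary correction from the image features.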
