Abstract

The success of deep-learning-based fault diagnosis methods usually relies on a large amount of high-quality data annotation, which is difficult and/or expensive to obtain in many practical industrial applications. When label noise exists, the performance of these methods is severely degraded. Self-training-based label correction is a possible approach to the label-noise problem: it uses the pseudolabels produced by a deep neural network (DNN) to gradually replace the original noisy labels, thereby reducing the impact of label noise. However, due to a lack of guidance, self-training methods are prone to confirmation bias, which can lead to many incorrect pseudolabels. In this work, we aim to address the confirmation bias of self-training in the label-noise scenario, so as to obtain better label correction results. A meta-self-training method is proposed, which adopts a self-training mechanism to train a teacher network and uses the pseudolabels generated by the teacher to train a student network. On this basis, the test loss of the student on a small amount of clean data is used to measure the quality of these pseudolabels, and a meta-learning framework is developed to feed this test loss back to the teacher to guide the generation of better pseudolabels. Experimental results show that the proposed method effectively improves the label correction results of self-training, so that both the teacher and the student models achieve good performance on label-noise fault diagnosis problems.
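As a minimal illustration of the teacher-student meta-feedback loop described above, the sketch below uses hypothetical one-parameter logistic models in place of the teacher and student networks, and a finite-difference estimate in place of backpropagating through the student update. All names and the toy dataset are illustrative assumptions, not the paper's implementation; the point is only the control flow: the teacher emits pseudolabels, the student trains on them, and the student's loss on a small clean set is fed back to update the teacher.

```python
# Toy sketch of a meta-self-training loop (hypothetical scalar models;
# the actual method uses deep networks and meta-gradients).
import math

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    # Binary cross-entropy between prediction p and target y.
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def student_after_step(w_s, w_t, xs, lr):
    # One student gradient step on the teacher's soft pseudolabels.
    g = 0.0
    for x in xs:
        pseudo = sigmoid(w_t * x)            # teacher pseudolabel
        g += (sigmoid(w_s * x) - pseudo) * x  # d(CE)/d(w_s)
    return w_s - lr * g / len(xs)

def clean_loss(w_s, clean):
    # Student's test loss on the small clean set.
    return sum(bce(sigmoid(w_s * x), y) for x, y in clean) / len(clean)

def meta_self_train(xs, clean, steps=200, lr=0.5, eps=1e-4):
    w_t, w_s = 0.1, 0.0
    for _ in range(steps):
        w_s = student_after_step(w_s, w_t, xs, lr)
        # Meta step: finite-difference gradient of the student's clean-set
        # loss with respect to the teacher parameter, fed back to the teacher.
        lo = clean_loss(student_after_step(w_s, w_t - eps, xs, lr), clean)
        hi = clean_loss(student_after_step(w_s, w_t + eps, xs, lr), clean)
        w_t -= lr * (hi - lo) / (2 * eps)
    return w_t, w_s

# Inputs whose original labels are assumed noisy (and thus ignored here),
# plus a small clean set with the true rule y = 1 if x > 0 else 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
clean = [(-1.5, 0), (1.5, 1)]
w_t, w_s = meta_self_train(xs, clean)
```

In this sketch the teacher is never trained on the noisy labels directly; it improves only because pseudolabels that let the student do well on the clean set receive a favorable meta-gradient, which mirrors how the feedback signal counteracts confirmation bias.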
