Abstract

While deep neural network (DNN)-based fault diagnosis methods can monitor faults that arise in known operating modes, they perform poorly on modes never experienced before. This limitation makes it challenging to ensure the production safety of chemical processes. In this article, this issue is formulated as domain generalization (DG), which aims to learn from historical operating modes a universal fault diagnosis model that generalizes well to unseen modes. Many existing DG approaches focus on learning a domain-invariant representation by aligning marginal distributions between domains, which ignores conditional relationships and label information. Recently, some studies have begun to reduce the discrepancy of class conditional distributions across domains, but a theoretical justification for doing so is still missing. To address this gap, a theoretical analysis of DG is developed to investigate how to minimize the risk on unseen domains, which provides a guarantee that DG methods can generalize well to unseen modes. This analysis reveals that the unseen-domain error can be bounded by the shift of the label and class conditional distributions across source domains. This result motivates a novel labeling and class progressive adversarial learning (LCPAL) algorithm for fault diagnosis, which simultaneously controls classification errors weighted by label information, aligns class conditional distributions between different historical operating modes, and reduces the adverse effect of domain-specific features. Empirical results on both a numerical example and the Tennessee Eastman process (TEP) demonstrate the effectiveness of the LCPAL approach.
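To make the core idea concrete, the sketch below illustrates class-conditional adversarial alignment with label-weighted classification errors, in the spirit of what the abstract describes. It is not the authors' LCPAL implementation: the network sizes, the gradient-reversal conditioning scheme, the class weights, and the synthetic multi-domain data are all placeholder assumptions for illustration only.

```python
# Illustrative sketch (not the authors' LCPAL code): class-conditional adversarial
# alignment across source domains using a gradient-reversal layer in PyTorch.
# All architecture choices, loss weights, and data shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class Model(nn.Module):
    def __init__(self, in_dim, feat_dim, n_classes, n_domains):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)
        # Conditional domain discriminator: it sees the outer product of features
        # and class probabilities, so alignment is performed per fault class.
        self.discriminator = nn.Sequential(nn.Linear(feat_dim * n_classes, 64),
                                           nn.ReLU(), nn.Linear(64, n_domains))

    def forward(self, x, lambd=1.0):
        f = self.feature(x)
        logits = self.classifier(f)
        p = F.softmax(logits, dim=1).detach()
        joint = torch.bmm(p.unsqueeze(2), f.unsqueeze(1)).flatten(1)  # (B, n_classes*feat_dim)
        dom_logits = self.discriminator(grad_reverse(joint, lambd))
        return logits, dom_logits

# Toy training step on synthetic data from several source operating modes (placeholder shapes).
n_classes, n_domains, in_dim = 5, 3, 20
model = Model(in_dim, feat_dim=32, n_classes=n_classes, n_domains=n_domains)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(128, in_dim)
y = torch.randint(0, n_classes, (128,))
d = torch.randint(0, n_domains, (128,))   # source-domain (operating-mode) index
class_weight = torch.ones(n_classes)      # stand-in for label-information-based weighting

logits, dom_logits = model(x, lambd=0.5)
cls_loss = F.cross_entropy(logits, y, weight=class_weight)
adv_loss = F.cross_entropy(dom_logits, d) # feature extractor receives reversed gradients
loss = cls_loss + adv_loss
opt.zero_grad(); loss.backward(); opt.step()
print(f"cls={cls_loss.item():.3f}  adv={adv_loss.item():.3f}")
```

In this sketch, the gradient-reversal layer makes the feature extractor work against the domain discriminator, which, because the discriminator is conditioned on the predicted class, encourages the class conditional feature distributions of the source operating modes to match.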
