Federated fault diagnosis has attracted increasing attention in industrial cloud–edge collaboration scenarios, where a ubiquitous assumption is that client models share the same architecture. In practice, this assumption cannot always be fulfilled owing to the need for personalized models, giving rise to the problem of model heterogeneity. Many approaches to heterogeneous models neglect the issue of representation bias, particularly under non-independent and identically distributed (non-IID) data. In this article, to address the representation bias problem, we propose Federated Model-Agnostic Knowledge Extraction (FedMAKE). Unlike methods that rely on public datasets, we first develop two novel architecture-independent knowledge carriers to bridge the information gap among clients; these carriers are derived from the importance of process variables and require no additional data. We then introduce a bi-directional distillation algorithm that uses the two knowledge carriers to mutually transfer the knowledge they embed between a generative network and the client models, enabling the generation of fault data that is unbiased and balanced across categories. Furthermore, to mitigate the impact of statistical heterogeneity, we formulate a local objective for each client that uses the two global knowledge carriers to guide local knowledge extraction and constrain client drift. Extensive experiments on two widely used industrial datasets (TE and CWRU) demonstrate that the proposed FedMAKE outperforms baseline methods, improving fault diagnosis accuracy by up to 11.7% on the TE dataset and up to 3.31% on the CWRU dataset over the second-best method.
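The abstract does not specify how the architecture-independent carriers are built or aggregated, so the following is only a rough illustrative sketch of the general idea, not the paper's method: every name, formula, and hyperparameter below (`variable_importance`, the mean-absolute-weight heuristic, the squared-distance drift penalty, `lam`) is a hypothetical stand-in. It shows how a per-client importance vector over process variables could serve as an architecture-independent knowledge carrier, be aggregated into a global carrier, and appear as a drift-constraining term in a local objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def variable_importance(weights: np.ndarray) -> np.ndarray:
    """Hypothetical carrier for one client: normalized importance of each
    process variable, here the mean absolute weight of a linear classifier
    (classes x variables) standing in for an arbitrary client architecture."""
    imp = np.abs(weights).mean(axis=0)
    return imp / imp.sum()

def aggregate_carriers(carriers, sizes) -> np.ndarray:
    """Global carrier as a data-size-weighted average of client carriers."""
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()
    return np.einsum("k,kd->d", w, np.stack(carriers))

def drift_penalty(local_carrier, global_carrier, lam=0.1) -> float:
    """Illustrative regularizer added to a client's local objective to keep
    its extracted knowledge close to the global carrier (constrain drift)."""
    return lam * float(np.sum((local_carrier - global_carrier) ** 2))

# Three heterogeneous clients: different class counts, same 5 process variables.
client_weights = [rng.normal(size=(c, 5)) for c in (3, 4, 6)]
carriers = [variable_importance(W) for W in client_weights]
g = aggregate_carriers(carriers, sizes=[100, 50, 150])

print("global carrier:", g.round(3))
print("drift penalty, client 0:", drift_penalty(carriers[0], g))
```

Because each carrier is just a vector over the shared process variables, it is independent of any client's network architecture, which is the property the abstract attributes to its knowledge carriers.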