In recent years, machine learning has been challenged by growing concerns regarding data privacy. This has led to the emergence of federated learning, which trains a model across distributed clients without sharing their data, thereby mitigating data privacy issues. However, this scheme may not generalize well to the heterogeneous data of distributed clients, particularly in industrial applications, which has motivated the development of personalized privacy-preserving approaches. Therefore, in this study, we introduced an index called gradient divergence to quantify data heterogeneity in federated learning, which was adopted to adjust the aggregation weights. A privacy-preserving personalized federated learning algorithm, called personalized federated stochastic gradient descent (P-PFedSGD), was developed to improve the performance of federated learning on local datasets while maintaining performance on other clients' datasets. P-PFedSGD allows clients to transmit local gradients instead of local models. This approach combines the advantages of personalized and federated learning while preserving data privacy. The developed P-PFedSGD algorithm was applied to tool-wear estimation to demonstrate its performance and effectiveness. The results showed that the developed approach addresses key challenges of federated learning more effectively than comparable algorithms, reducing both communication cost and client-side computation.
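To make the aggregation idea concrete, below is a minimal sketch of divergence-weighted gradient aggregation in a federated SGD round. It assumes each client uploads its local gradient (as the abstract states) and that aggregation weights shrink as a client's gradient diverges from the mean; the inverse-divergence weighting rule, the function names, and the learning rate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_divergence(client_grads):
    """Divergence of each client's gradient from the average gradient.

    client_grads: array of shape (num_clients, num_params).
    Returns one nonnegative divergence value per client.
    """
    mean_grad = np.mean(client_grads, axis=0)
    return np.array([np.linalg.norm(g - mean_grad) for g in client_grads])

def aggregate(client_grads, eps=1e-8):
    """Weight clients inversely to their gradient divergence, then average.

    The inverse-divergence rule is a hypothetical choice; the abstract only
    says divergence is used to adjust the aggregation weights.
    """
    div = gradient_divergence(client_grads)
    weights = 1.0 / (div + eps)       # heterogeneous clients get smaller weight
    weights /= weights.sum()          # normalize to a convex combination
    return np.average(client_grads, axis=0, weights=weights)

def server_round(global_model, client_grads, lr=0.1):
    """One federated round: clients upload gradients, server takes an SGD step."""
    agg_grad = aggregate(client_grads)
    return global_model - lr * agg_grad

# Toy usage: three clients, one of them (mu=2.0) with heterogeneous data,
# training a 4-parameter model.
rng = np.random.default_rng(0)
model = rng.normal(size=4)
grads = np.stack([rng.normal(loc=mu, size=4) for mu in (0.0, 0.1, 2.0)])
model = server_round(model, grads)
```

Because only gradients (never raw data or full models) leave the clients, this sketch also reflects the communication pattern the abstract describes.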