Abstract

Federated learning (FL), an emerging learning paradigm, has attracted widespread attention because it allows distributed industrial agents to collaboratively develop a global model while keeping their data local. Although various FL-based algorithms have been proposed to solve engineering tasks in industrial cyber-physical systems, existing works rarely study a practical problem: the training samples collected by certain industrial agents (called unreliable industrial agents) may be of low quality. Such unreliable agents degrade the accuracy of the global model. In this article, we propose DetectPMFL, a privacy-preserving momentum federated learning scheme that accounts for unreliable industrial agents. In DetectPMFL, we design a detection method to alleviate the adverse effect of the unreliable agents. In addition, we analyze the privacy issues through a mathematical description, especially for the convolutional neural network. Based on this analysis, Cheon-Kim-Kim-Song (CKKS) homomorphic encryption is used to protect the private information of the agents. The proposed approach is evaluated on two common recognition datasets. The security analysis and experimental results indicate that DetectPMFL is robust against unreliable industrial agents and achieves high accuracy while preserving privacy.
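To make the idea concrete, the following is a minimal sketch (not the paper's actual DetectPMFL algorithm) of server-side momentum aggregation that down-weights clients whose updates deviate strongly from the consensus, which is one simple way an unreliable-agent detection step could work. All function names and the cosine-similarity detection rule here are illustrative assumptions.

```python
import math

def cosine(u, v):
    # cosine similarity between two flat update vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def aggregate(updates, velocity, lr=1.0, beta=0.9, sim_threshold=0.0):
    """Average client updates, excluding those dissimilar to the mean
    update (a stand-in for unreliable-agent detection), then apply
    server-side momentum: v <- beta * v + avg, step = lr * v."""
    d = len(updates[0])
    mean = [sum(u[i] for u in updates) / len(updates) for i in range(d)]
    # keep only updates pointing roughly the same way as the consensus
    trusted = [u for u in updates if cosine(u, mean) > sim_threshold]
    avg = [sum(u[i] for u in trusted) / len(trusted) for i in range(d)]
    velocity = [beta * v + a for v, a in zip(velocity, avg)]
    step = [lr * v for v in velocity]
    return step, velocity

# two reliable agents agree; one "unreliable" agent points the other way
updates = [[1.0, 1.0], [0.9, 1.1], [-1.0, -1.0]]
step, vel = aggregate(updates, velocity=[0.0, 0.0])
```

In a full privacy-preserving pipeline as described in the abstract, the client updates would additionally be encrypted under CKKS before aggregation; that layer is omitted here for brevity.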
