Abstract

Federated learning (FL) is a distributed learning approach that allows distributed computing nodes to collaboratively develop a global model while keeping their data local. However, the issues of privacy preservation and performance hinder the application of FL in industrial cyber-physical systems (ICPSs). In this work, we propose a privacy-preserving momentum FL approach, named PMFL, which uses a momentum term to accelerate the model convergence rate during training. Furthermore, the CKKS fully homomorphic encryption scheme is adopted to encrypt the gradient parameters of the industrial agents’ models, preserving their local privacy information. In particular, the cloud server calculates the global encrypted momentum term from the encrypted gradients, following the momentum gradient descent (MGD) optimization algorithm. The performance of the proposed PMFL is evaluated on two common deep learning datasets, i.e., MNIST and Fashion-MNIST. Theoretical analysis and experimental results confirm that the proposed approach improves the convergence rate while preserving the privacy information of the industrial agents.
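The momentum update the abstract refers to can be sketched as follows. This is a minimal plaintext illustration of momentum gradient descent (MGD), not the paper's implementation: in PMFL the velocity and gradients would be CKKS ciphertexts held by the server, and the hyperparameter names (`eta`, `gamma`) are ours.

```python
# Minimal sketch of momentum gradient descent (MGD):
#   v <- gamma * v + eta * grad;  w <- w - v
# Hyperparameters eta/gamma are illustrative, not from the paper.

def mgd_step(w, grad, velocity, eta=0.1, gamma=0.9):
    """One momentum update; returns the new weight and velocity."""
    velocity = gamma * velocity + eta * grad
    return w - velocity, velocity

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    w, v = mgd_step(w, 2 * (w - 3), v)
# w is now close to the minimizer 3; momentum overshoots early but
# converges faster than plain gradient descent on ill-conditioned losses.
```

The velocity accumulates a decaying sum of past gradients, which is what lets the server-side aggregation in PMFL smooth noisy per-round gradients and speed up convergence.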

Highlights

  • Industrial cyber-physical system (CPS) is an emerging technology that focuses on the integration of computational applications with physical devices [1,2,3]

  • The workflow of the privacy-preserving momentum federated learning (PMFL) approach comprises three phases: system initialization, local model training by the industrial agents, and model parameter aggregation executed by the cloud server

  • We present a privacy-preserving momentum federated learning (PMFL) approach for industrial cyber-physical systems (ICPSs)
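The three phases listed above can be sketched as one training round. This is a plaintext toy (all names and hyperparameters are ours): in the actual scheme, the gradients sent in phase 2 and the momentum term updated in phase 3 would be CKKS-encrypted, which is possible because the aggregation uses only additions and scalar multiplications.

```python
import numpy as np

def local_gradient(w, X, y):
    # Phase 2: each industrial agent computes the gradient of a local
    # least-squares loss 0.5 * ||Xw - y||^2 / n on its private data.
    return X.T @ (X @ w - y) / len(y)

def aggregate(grads, velocity, eta=0.1, gamma=0.9):
    # Phase 3: the server averages the received gradients and updates
    # the global momentum term (momentum gradient descent). Averaging
    # and scaling are the only operations, so they could be carried out
    # on ciphertexts under an additively homomorphic scheme like CKKS.
    g = np.mean(grads, axis=0)
    return gamma * velocity + eta * g

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Phase 1: system initialization of the global model and momentum.
w, v = np.zeros(3), np.zeros(3)
shards = np.array_split(np.arange(100), 4)   # four agents, disjoint data

for _ in range(300):
    grads = [local_gradient(w, X[i], y[i]) for i in shards]
    v = aggregate(grads, v)
    w = w - v   # broadcast the updated global model to the agents
```

After the loop, the global model recovers `w_true` even though no agent's raw data ever left its shard.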


Introduction

Linlin Zhang and Zehui Zhang contributed to this work.

Industrial cyber-physical system (CPS) is an emerging technology that focuses on the integration of computational applications with physical devices [1,2,3]. Centralizing industrial data for model training, however, raises privacy concerns and leaves the data siloed across sites. To address this problem, the federated learning (FL) approach is proposed, coordinating multiple training participants to collaboratively train a global model [14, 15]. In an FL system, the training participants share only the gradients of their local models with the cloud server instead of the raw data. Since FL can effectively solve the data-island issue, it has attracted widespread attention in many industrial fields. The authors utilize FL technologies to meet the requirements of distributed computing and …
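The claim that sharing gradients suffices can be checked on a toy problem of our own construction (the loss, shard sizes, and names below are illustrative): when the agents hold equal-sized disjoint shards, the average of their local gradients equals the gradient computed on the pooled data, so the server can drive training without ever receiving that data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))   # pooled data the server never sees
y = rng.normal(size=60)
w = np.zeros(2)                # current global model

def grad(Xs, ys):
    # Gradient of the mean-squared-error loss 0.5 * mean((Xw - y)^2).
    return Xs.T @ (Xs @ w - ys) / len(ys)

# Three agents hold disjoint, equal-sized shards of the data.
shards = np.split(np.arange(60), 3)
local = [grad(X[i], y[i]) for i in shards]

# The server receives only `local`; its average matches the
# centralized full-batch gradient exactly.
print(np.allclose(np.mean(local, axis=0), grad(X, y)))  # True
```

With unequal shard sizes the server would instead take a weighted average, but the privacy argument is unchanged: only model updates cross the network.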


