Abstract

Federated learning enables data owners to jointly train a neural network without sharing their personal data, which makes it possible to leverage sensitive data generated by various Industrial Internet of Things (IIoT) devices. However, in traditional federated learning, users send their model parameters directly to the server, which increases the risk of privacy leakage. To address this problem, several privacy-preserving solutions have been proposed, but most of them either reduce model accuracy or increase computation and communication overhead. In addition, federated learning remains exposed to the risk of model tampering, which may impair model accuracy. In this paper, we propose PPTFL, a Privacy-Preserving and Traceable Federated Learning framework with efficient performance. Specifically, we first propose Hierarchical Aggregation Federated Learning (HAFL) to protect privacy with low overhead, which is suitable for IIoT scenarios. We then combine federated learning with blockchain and IPFS, which makes the parameters traceable and tamper-proof. Extensive experiments demonstrate the practical performance of PPTFL.
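To make the hierarchical-aggregation idea concrete, the sketch below shows a generic two-level FedAvg-style aggregation in Python: clients report parameters only to an intermediate aggregator (e.g., one per IIoT site), and the central server sees only the per-group aggregates. The grouping, names, and weighting scheme are illustrative assumptions for exposition, not the actual HAFL protocol described in the paper.

```python
import numpy as np

def weighted_average(params_list, weights):
    """Average a list of parameter vectors, weighted by local sample counts."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, params_list))

# Simulated client updates: (parameter vector, local sample count),
# grouped under a hypothetical intermediate aggregator per IIoT site.
rng = np.random.default_rng(0)
groups = {
    "site_a": [(rng.normal(size=4), 120), (rng.normal(size=4), 80)],
    "site_b": [(rng.normal(size=4), 200), (rng.normal(size=4), 50)],
}

# Level 1: each intermediate aggregator averages its own clients' parameters,
# so individual client updates never leave the local group.
group_models, group_sizes = [], []
for site, clients in groups.items():
    params, counts = zip(*clients)
    group_models.append(weighted_average(params, counts))
    group_sizes.append(sum(counts))

# Level 2: the central server aggregates only the per-group models,
# never observing any single client's parameters.
global_model = weighted_average(group_models, group_sizes)
print(global_model)
```

In a traceable deployment such as the one the abstract describes, the per-round aggregates could additionally be stored off-chain (e.g., on IPFS) with their hashes recorded on a blockchain, making the parameter history auditable and tamper-evident.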
