Abstract

As an important approach to overcoming data silos and privacy concerns in deep learning, federated learning, which jointly trains a global model while keeping data local, has shown remarkable performance in a range of industrial applications. However, federated learning still suffers from the problem that shared gradients may be subject to tampering, inference attacks, and falsification. To address this issue, we propose a verifiable federated learning framework to deal with malicious aggregators. First, we propose a reputation calculation mechanism, based on a multiweight subjective logic model, to solve the problem of selecting a reliable aggregator. Furthermore, we design a verifiable federated learning scheme that ensures data confidentiality, integrity, and verifiability, and also supports the dynamic withdrawal of clients. Security analyses indicate that our framework is secure against malicious adversaries. Moreover, experimental results on real datasets show that our verifiable federated learning achieves high accuracy with practical efficiency.
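To illustrate the kind of reputation score a multiweight subjective logic model produces, the sketch below maps weighted counts of positive and negative interactions with a candidate aggregator to a belief/disbelief/uncertainty opinion and returns its expected value. The function name, the weights, and the prior are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of a subjective-logic reputation score with
# per-interaction-type weights. Weights and prior are illustrative
# assumptions, not the paper's exact parameters.

def reputation(positive, negative, prior=0.5,
               weight_pos=1.0, weight_neg=1.5):
    """Form an opinion (belief, disbelief, uncertainty) from weighted
    positive/negative interaction counts and return the expected
    reputation: belief + prior * uncertainty."""
    p = weight_pos * positive
    n = weight_neg * negative
    total = p + n + 2.0        # 2.0 is a default non-informative prior weight
    belief = p / total
    disbelief = n / total
    uncertainty = 2.0 / total
    return belief + prior * uncertainty
```

With no interaction history the score stays at the prior (0.5 here); negative interactions, weighted more heavily than positive ones, pull the score down faster, which is the usual rationale for multiweight schemes when selecting a reliable aggregator.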
