Abstract

Federated learning (FL) is a privacy-aware machine learning paradigm in which models are trained on the users' side and only the model updates are transmitted to the server for aggregation. Because data owners need not upload their raw data, FL is privacy-preserving. However, FL is vulnerable to reverse attacks, in which an adversary can recover a user's data by analyzing the model that user uploads. Motivated by this, in this paper we design EPPDA, an efficient privacy-preserving data aggregation mechanism for FL based on secret sharing, to resist reverse attacks: it aggregates users' trained models secretly, without leaking any individual user's model. Moreover, EPPDA provides efficient fault tolerance [1] against user disconnection: even if a large number of users disconnect while the protocol runs, EPPDA still executes normally. Analysis shows that EPPDA delivers the sum of the locally trained models to the server without leaking any single user's model, and that an adversary cannot obtain any non-public information from the communication channel. Efficiency evaluation shows that EPPDA not only protects users' privacy but also requires less computation and communication.
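The abstract does not spell out EPPDA's concrete construction, but the core idea of secret-sharing-based aggregation can be illustrated with a minimal additive-secret-sharing sketch: each user splits its (integer-encoded) model update into random shares that sum to the update modulo a prime, so the server only ever sees masked sums whose total equals the aggregate. The modulus `P`, the function names, and the use of simple additive sharing (rather than EPPDA's actual scheme, which also supports fault tolerance) are all illustrative assumptions, not details from the paper.

```python
import random

P = 2**61 - 1  # illustrative large prime modulus (not specified by the paper)

def share(value, n):
    """Split `value` into n additive shares that sum to `value` mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(updates):
    """Sketch of secret-shared aggregation: each user shares its update
    among all users; each user forwards only the sum of the shares it
    received, and the server sums those masked values to recover the
    total update without seeing any individual one."""
    n = len(updates)
    # all_shares[i][j] = share j produced by user i
    all_shares = [share(u, n) for u in updates]
    # user j forwards the sum of the shares it received (column j)
    masked = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(masked) % P

# The server learns only the aggregate of all updates.
assert aggregate([5, 11, 7]) == 23
```

In practice, real model updates are vectors of quantized weights shared coordinate-wise, and a threshold scheme such as Shamir secret sharing would be needed to tolerate dropped users; plain additive sharing, as above, fails if any user disconnects.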
