Abstract

To preserve participants' privacy, Federated Learning (FL) has been proposed to let participants collaboratively train a global model by sharing their training gradients instead of their raw data. However, several studies have shown that conventional FL is insufficient to protect privacy from adversaries, as useful information can still be recovered even from gradients. To obtain stronger privacy protection, Differential Privacy (DP) has been proposed on both the server's side and the clients' side. Although adding artificial noise to the raw data can enhance users' privacy, the accuracy of the FL model is inevitably degraded. In addition, although the communication overhead of FL is much smaller than that of centralized learning, frequent parameter exchange still makes it a bottleneck for learning performance and utilization efficiency. To tackle these problems, we propose a new FL framework that applies DP both locally and centrally to strengthen the protection of participants' privacy. To improve the model's accuracy, we also apply sparse gradients and Momentum Gradient Descent on both the server's side and the clients' side. Moreover, using sparse gradients reduces the total communication cost. We provide experiments to evaluate our proposed framework, and the results show that it not only outperforms other DP-based FL frameworks in terms of model accuracy but also provides a stronger privacy guarantee. In addition, our framework can save up to 90% of the communication cost while achieving the best accuracy.
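To illustrate how these ingredients could fit together, the following is a minimal sketch (not the paper's actual algorithm): a client applies Momentum Gradient Descent, sparsifies its update by keeping the top-k entries, and perturbs it with Gaussian noise before uploading (local DP), while the server averages the uploads and adds further noise (central DP). All names (top_k_sparsify, clip_and_add_noise, Client, server_aggregate) and all hyperparameter values are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def top_k_sparsify(grad, k_ratio=0.1):
    """Keep only the k largest-magnitude entries of the update (others set to 0)."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

def clip_and_add_noise(grad, clip_norm=1.0, sigma=0.5):
    """Clip to a bounded L2 norm, then add Gaussian noise (DP mechanism)."""
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    return grad + np.random.normal(0.0, sigma * clip_norm, size=grad.shape)

class Client:
    def __init__(self, lr=0.1, beta=0.9):
        self.lr, self.beta = lr, beta
        self.velocity = None  # momentum buffer

    def local_update(self, grad):
        # Momentum Gradient Descent on the client's side.
        if self.velocity is None:
            self.velocity = np.zeros_like(grad)
        self.velocity = self.beta * self.velocity + grad
        update = self.lr * self.velocity
        # Sparsify to cut communication cost, then perturb locally (local DP).
        update = top_k_sparsify(update, k_ratio=0.1)
        return clip_and_add_noise(update, clip_norm=1.0, sigma=0.5)

def server_aggregate(client_updates, central_sigma=0.1):
    # Average the clients' sparse, noisy updates and add central DP noise.
    mean_update = np.mean(client_updates, axis=0)
    return mean_update + np.random.normal(0.0, central_sigma, size=mean_update.shape)
```

Because each uploaded update is top-k sparse, only the nonzero indices and values need to be transmitted, which is where the communication savings reported in the abstract would come from under this kind of scheme.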
