Abstract

Federated learning (FL) enables large-scale machine learning while preserving user data privacy thanks to its decentralized structure. However, user data can still be inferred from the shared model updates. To strengthen privacy, we consider FL with local differential privacy (LDP). Another challenge in FL is the huge communication cost caused by the iterative transmission of model updates. Quantization has been used in the literature to reduce this cost, but few works account for its effect on LDP or for the unboundedness of the randomized model updates. We propose a communication-efficient FL algorithm with LDP that applies a Gaussian mechanism followed by quantization and Elias-gamma coding. A novel design of the algorithm guarantees LDP even after quantization. For the proposed algorithm, we provide a theoretical analysis of the trade-off between privacy and communication cost: quantization reduces the communication cost but requires a larger perturbation to guarantee LDP. Experimental results show that the accuracy is affected mostly by the noise from the LDP mechanism, which grows when the quantization error is larger. Nonetheless, in our experiments the proposed algorithm achieves LDP with a significant compression ratio at only a slight loss of accuracy. Furthermore, under the same privacy budget and communication cost constraints, it outperforms a baseline algorithm based on a discrete Gaussian mechanism.
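
The transmit-side pipeline named in the abstract (Gaussian mechanism, then quantization, then Elias-gamma coding) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function names, the zig-zag mapping of signed quantization levels to positive integers, and the parameters `sigma` (noise scale) and `step` (quantizer step size) are assumptions for the example; the paper calibrates the noise and the quantizer jointly so that LDP still holds after quantization.

```python
import numpy as np


def elias_gamma_encode(n: int) -> str:
    """Elias-gamma codeword (as a bit string) for a positive integer n >= 1."""
    if n < 1:
        raise ValueError("Elias-gamma coding is defined for integers >= 1")
    bits = bin(n)[2:]                      # binary representation, MSB first
    return "0" * (len(bits) - 1) + bits    # unary length prefix + binary value


def privatize_and_compress(update, sigma, step, rng=None):
    """Hypothetical sketch: Gaussian mechanism -> uniform quantization
    -> Elias-gamma coding of the quantization levels.

    `sigma` and `step` are assumed parameters, not the paper's calibration.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = update + rng.normal(0.0, sigma, size=update.shape)  # Gaussian mechanism
    levels = np.rint(noisy / step).astype(np.int64)             # uniform quantizer
    # Zig-zag map signed levels to positive integers so Elias-gamma applies:
    # 0 -> 1, -1 -> 2, 1 -> 3, -2 -> 4, 2 -> 5, ...
    positive = np.where(levels >= 0, 2 * levels + 1, -2 * levels)
    bitstream = "".join(elias_gamma_encode(int(k)) for k in positive)
    dequantized = levels * step            # what the receiver reconstructs
    return dequantized, bitstream


# Example: compress a small (noisy) model update.
w = np.array([0.021, -0.004, 0.017])
deq, bits = privatize_and_compress(w, sigma=0.01, step=0.005)
```

In this sketch, a larger `step` yields smaller integer levels and hence shorter Elias-gamma codewords, but, consistent with the trade-off stated above, the coarser quantization would call for a larger perturbation to keep the LDP guarantee.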
