Abstract

Federated learning (FL) often uses local differential privacy (LDP) to prevent gradients from leaking private data. However, because gradients are high-dimensional, LDP suffers from privacy-budget explosion in practice, which yields low training accuracy. To overcome this shortcoming, we propose a differentially private FL protocol that incorporates a control matrix and double shuffles. The control matrix, generated by the analyzer, governs the selection and upload of clients’ gradients. Two shufflers shuffle the control matrix and the clients’ gradients, respectively, so that the control matrix is invisible to the server and the gradients are anonymous to the server. In addition, existing differentially private FL typically clips all gradients with the same clipping scale to simplify calibrating the noise scale; this introduces excessive clipping error for large gradients and excessive noise error for small ones. To solve this problem, we propose an adaptive clipping scheme. Experiments on real-world datasets show that our proposed methods achieve higher test accuracy.
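The abstract does not spell out the protocol, so the following is only a minimal sketch of the two named ideas under stated assumptions: the control matrix is taken to be a boolean matrix whose row i marks which gradient coordinates client i uploads (limiting each client's per-round privacy cost), and each shuffler is modeled as a uniformly random permutation of reports. The function names (make_control_matrix, shuffle) and the coordinate-selection rule are illustrative, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_control_matrix(n_clients: int, dim: int, k: int) -> np.ndarray:
    """Hypothetical analyzer step: assign each client k of the dim
    gradient coordinates to report, instead of the full gradient."""
    M = np.zeros((n_clients, dim), dtype=bool)
    for i in range(n_clients):
        M[i, rng.choice(dim, size=k, replace=False)] = True
    return M

def shuffle(reports: list) -> list:
    """Shuffler step: a uniformly random permutation removes the link
    between a report and the client that produced it."""
    order = rng.permutation(len(reports))
    return [reports[i] for i in order]

# One round: each client reports only the coordinates selected by the
# (shuffled, hence server-invisible) control matrix; a second shuffler
# anonymizes the gradient reports before the server aggregates them.
n, d, k = 5, 8, 3
M = shuffle(list(make_control_matrix(n, d, k)))        # first shuffle
grads = [rng.normal(size=d) for _ in range(n)]
reports = [(np.flatnonzero(M[i]), grads[i][M[i]]) for i in range(n)]
anonymous_reports = shuffle(reports)                   # second shuffle
```

In this toy model, shuffling the control matrix hides the selection pattern from the server, while shuffling the gradient reports breaks the link between a report and its sender.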
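Likewise, the adaptive clipping scheme itself is not specified in the abstract. A common way to realize the stated goal (avoiding large clipping error on big gradients and large noise error on small ones) is to set the clip threshold from a running quantile of observed gradient norms and calibrate the noise to that clip. The sketch below follows that assumption; adaptive_clip and its quantile rule are hypothetical, not necessarily the paper's method.

```python
import numpy as np

def clip_and_noise(grad: np.ndarray, clip: float, noise_mult: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Clip a gradient to L2 norm `clip`, then add Gaussian noise whose
    standard deviation scales with the clip (the sensitivity)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_mult * clip, size=grad.shape)

def adaptive_clip(norm_history: list, quantile: float = 0.5) -> float:
    """Hypothetical adaptive rule: set the clip to a quantile of recent
    gradient norms, so the threshold tracks the gradients' actual scale."""
    return float(np.quantile(norm_history, quantile))

# Usage on toy gradients: the clip shrinks as training gradients shrink,
# so the injected noise shrinks with them.
rng = np.random.default_rng(0)
norms = []
for step in range(100):
    g = rng.normal(size=64) * (10.0 if step < 5 else 1.0)
    norms.append(float(np.linalg.norm(g)))
    clip = adaptive_clip(norms) if len(norms) > 10 else 1.0
    g_private = clip_and_noise(g, clip, noise_mult=0.5, rng=rng)
```

With a fixed clip chosen for the early, large gradients, the noise scale would stay oversized for the rest of training; tying the clip to a norm quantile keeps both error sources proportionate.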
