Abstract

With the deployment of local differential privacy (LDP), federated learning (FL) has gained stronger privacy-preserving capability against inference-type attacks. However, existing LDP methods degrade global model performance. In this paper, we propose a QP-LDP algorithm for FL that obtains a better-performing global model without weakening the privacy guarantees of the original LDP. Unlike previous LDP methods for FL, QP-LDP improves global model performance by precisely perturbing only the non-common components of quantized local contributions. In addition, QP-LDP comprehensively protects two types of local contributions. Our security analysis shows that QP-LDP provides probabilistic indistinguishability of clients' private local contributions at the component level. More importantly, carefully designed experiments show that with the deployment of QP-LDP, the global model outperforms its counterpart in the original LDP-based FL in terms of prediction accuracy and convergence rate.
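The abstract does not specify QP-LDP's actual mechanisms, so the sketch below only illustrates the general idea it describes: quantize a local contribution, then apply an epsilon-LDP perturbation to the non-common components while leaving the common ones intact. The uniform quantizer, the k-ary randomized response (a standard LDP primitive, not necessarily the paper's mechanism), the `non_common_mask`, and all function names here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def quantize(update, num_levels=16, clip=1.0):
    """Uniformly quantize a clipped update vector to integer levels 0..num_levels-1."""
    clipped = np.clip(update, -clip, clip)
    scaled = (clipped + clip) / (2 * clip)  # map to [0, 1]
    return np.round(scaled * (num_levels - 1)).astype(int)

def dequantize(levels, num_levels=16, clip=1.0):
    """Map integer levels back to the real-valued range [-clip, clip]."""
    return levels / (num_levels - 1) * (2 * clip) - clip

def k_ary_randomized_response(levels, num_levels, epsilon, rng):
    """epsilon-LDP perturbation: keep each level w.p. p, else report another level uniformly."""
    p = np.exp(epsilon) / (np.exp(epsilon) + num_levels - 1)
    keep = rng.random(levels.shape) < p
    # Shifting by a random offset in [1, num_levels-1] draws uniformly from the other levels.
    offsets = rng.integers(1, num_levels, size=levels.shape)
    return np.where(keep, levels, (levels + offsets) % num_levels)

def qp_ldp_perturb(update, non_common_mask, num_levels=16, epsilon=2.0, clip=1.0, seed=0):
    """Perturb only the components flagged as non-common; common components pass through."""
    rng = np.random.default_rng(seed)
    levels = quantize(update, num_levels, clip)
    noisy = levels.copy()
    noisy[non_common_mask] = k_ary_randomized_response(
        levels[non_common_mask], num_levels, epsilon, rng)
    return dequantize(noisy, num_levels, clip)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    update = rng.normal(scale=0.1, size=8)
    # Hypothetical mask: in QP-LDP this would come from identifying non-common components.
    non_common_mask = np.array([True, False, True, True, False, True, False, True])
    print(qp_ldp_perturb(update, non_common_mask))
```

Because each perturbed component is released through a randomized-response channel satisfying epsilon-LDP, any two level values are probabilistically indistinguishable at the component level, which matches the guarantee the abstract claims; leaving common components untouched is what reduces the injected noise relative to perturbing the full vector.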
