Abstract
Federated learning (FL) combined with local differential privacy (LDP) has attracted considerable attention for its ability to resist inference-type attacks, e.g., model inversion attacks and membership inference attacks. However, the noise introduced by LDP degrades global model performance, while reducing the noise by setting a larger privacy budget weakens the privacy guarantee. In this paper, we propose a layer-wise LDP scheme for FL systems, dubbed LLDP, which perturbs different layers of a local model according to each client's self-assigned privacy budget. With LLDP deployed, clients can train a highly accurate and rapidly converging global model without losing privacy guarantees. Our security analysis shows that the proposed LLDP scheme guarantees (ε, δ)-LDP for the entire local model and achieves probabilistic indistinguishability of the local model under the widely adopted semi-honest threat model. Extensive experiments show that, under the same privacy budget, LLDP improves global model accuracy and convergence rate by 3.38% and 4.76%, respectively, on the CIFAR-10 dataset compared with the state-of-the-art LDP method. In addition, to reach the same training target (loss value), LLDP requires a 26.67% smaller privacy budget, providing stronger privacy guarantees against model inversion attacks.
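The abstract does not specify the exact perturbation mechanism, so the following is only a minimal sketch of the layer-wise idea: each client splits its privacy budget across layers and adds Gaussian-mechanism noise per layer. The function names, the budget split, and the use of the classic analytic Gaussian bound are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    """Noise scale sufficient for (epsilon, delta)-DP under the
    classic analytic Gaussian-mechanism bound (valid for epsilon <= 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def perturb_model_layerwise(layers, layer_budgets, delta, clip_norm=1.0):
    """Clip each layer's parameters and add per-layer Gaussian noise.

    layers:        dict mapping layer name -> np.ndarray of parameters
    layer_budgets: dict mapping layer name -> epsilon assigned to that layer
    delta:         per-layer failure probability
    clip_norm:     L2 clipping bound, which fixes each layer's sensitivity
    """
    noisy = {}
    for name, w in layers.items():
        # Clip to bound the L2 sensitivity of this layer's update.
        norm = np.linalg.norm(w)
        w_clipped = w * min(1.0, clip_norm / (norm + 1e-12))
        sigma = gaussian_sigma(layer_budgets[name], delta, clip_norm)
        noisy[name] = w_clipped + np.random.normal(0.0, sigma, size=w.shape)
    return noisy

# Hypothetical example: a client assigns a larger share of its budget
# (i.e., less noise) to the layer it deems more accuracy-critical.
layers = {"conv1": np.random.randn(32, 3, 3), "fc": np.random.randn(10, 128)}
budgets = {"conv1": 0.3, "fc": 0.7}
noisy_layers = perturb_model_layerwise(layers, budgets, delta=1e-5)
```

Under basic sequential composition, perturbing L layers with budgets ε₁, …, ε_L and per-layer δ would give the whole local model a (Σᵢ εᵢ, L·δ)-LDP guarantee; the paper's own accounting may be tighter.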