Abstract
Federated learning (FL), as a privacy-preserving technology, enables multiple clients to collaboratively train models on decentralized data. However, transmitting model parameters between local clients and the central server can result in information leakage. Differentially private federated learning (DPFL) has emerged as a promising solution for enhancing privacy. Nevertheless, existing DPFL schemes suffer from two issues: (i) most schemes that aim to achieve the desired model accuracy may incur a high privacy budget, and (ii) several schemes that address the trade-off between privacy and accuracy by using a linear clipping bound may distort numerous model parameters. In this paper, we first propose FDP-FL, a flexible differential privacy approach for FL. FDP-FL introduces a novel series-sum privacy budget allocation in place of uniform allocation and enables adaptive, nonlinear noise scale decay. In this way, a tight bound on the cumulative privacy loss can be achieved while optimizing model accuracy. Then, to mitigate gradient leakage caused by honest-but-curious clients and an honest-but-curious server, we further design client-level FDP-FL and record-level FDP-FL, respectively. Experimental results demonstrate that FDP-FL improves model accuracy by $\sim$13.3% compared with basic DP-FL under a fixed privacy budget and outperforms existing trade-off schemes under the same hyperparameter settings.
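To make the abstract's core idea concrete, the sketch below contrasts uniform per-round privacy budget allocation with a decaying-series allocation in which later rounds receive larger budgets, so the injected noise scale shrinks nonlinearly over training. This is an illustrative assumption, not the paper's actual FDP-FL construction: the geometric ratio, the function names, and the Gaussian-mechanism calibration are all stand-ins, and the naive summation of per-round budgets used here is a looser accounting than the tight composition bound the paper claims.

```python
import numpy as np

def uniform_allocation(total_eps: float, rounds: int) -> np.ndarray:
    # Baseline DP-FL: every round receives the same share of the budget.
    return np.full(rounds, total_eps / rounds)

def series_sum_allocation(total_eps: float, rounds: int,
                          ratio: float = 1.05) -> np.ndarray:
    # HYPOTHETICAL stand-in for a series-sum allocation: per-round
    # budgets follow a geometric series eps_t ∝ ratio**t (ratio > 1),
    # normalized so the whole series sums exactly to total_eps.
    # Growing eps_t means the per-round noise decays nonlinearly.
    weights = ratio ** np.arange(rounds)
    return total_eps * weights / weights.sum()

def gaussian_sigma(eps: float, delta: float, sensitivity: float = 1.0) -> float:
    # Classical Gaussian-mechanism calibration (valid for eps <= 1):
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps.
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

if __name__ == "__main__":
    rounds, total_eps, delta = 100, 8.0, 1e-5
    eps_t = series_sum_allocation(total_eps, rounds)
    # Noise scale at the first and last rounds: sigma falls as eps grows.
    print(f"eps_0={eps_t[0]:.4f}, eps_T={eps_t[-1]:.4f}, sum={eps_t.sum():.1f}")
    print(f"sigma_0={gaussian_sigma(eps_t[0], delta):.1f}, "
          f"sigma_T={gaussian_sigma(eps_t[-1], delta):.1f}")
```

Under this toy allocation the budget series still sums to the fixed total (8.0 here), but early rounds, where gradients are noisy anyway, consume little budget, while later rounds, where precise updates matter most for accuracy, are perturbed less.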