Federated learning (FL) has become a prevalent paradigm for collaboratively training a model across multiple clients under the coordination of a central server. Because traditional FL suffers from client drift due to data heterogeneity across clients, many personalized FL (PFL) techniques have been proposed. However, the issue of privacy leakage within PFL remains inadequately addressed. Incorporating differential privacy (DP) directly into PFL to provide rigorous privacy guarantees amplifies heterogeneity among clients and introduces high variance into the uploaded information, significantly compromising the model's utility. In this paper, we propose a novel privacy-preserving PFL framework called Differentially Private Federated Elastic weight consolidation (DP-FedEwc) to achieve effective model personalization for each client under sample-level DP. We focus on a practical setting where the server is honest-but-curious. We first implement the FedEwc algorithm in a communication-efficient manner and provide privacy guarantees by perturbing models and their parameter importance (PI). We show that FedEwc is robust to the DP-induced heterogeneity caused by noisy models, especially when the model is a deep neural network. Since excessive noise may render PI invalid, we present an Adaptive Parameter importance Perturbation (APP) method that adaptively adds Gaussian noise to different coordinates of PI, thereby alleviating the negative effect of DP noise. Moreover, to accurately calibrate the privacy cost incurred by querying heterogeneous data across various clients when computing PI through APP, we adapt a Bayesian Accountant (BA) method to DP-FedEwc. We conduct experiments on standard benchmark datasets, and the results confirm the superiority of DP-FedEwc over DP-PFL baselines.
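To make the idea of coordinate-wise adaptive perturbation concrete, the sketch below shows one plausible way to add Gaussian noise to a parameter-importance vector with a per-coordinate scale. The function name, the importance-based noise-allocation rule, and the budget heuristic are all assumptions for illustration; they are not the paper's actual APP method or its DP calibration.

```python
import numpy as np

def adaptive_pi_perturbation(pi, epsilon_budget, sensitivity, rng=None):
    """Illustrative sketch (not the paper's APP): perturb each coordinate of a
    parameter-importance (PI) vector with Gaussian noise, allocating smaller
    noise scales to more important coordinates so they remain informative."""
    rng = np.random.default_rng() if rng is None else rng
    pi = np.asarray(pi, dtype=float)
    # Normalize importance magnitudes to [0, 1] weights.
    mags = np.abs(pi)
    weights = mags / (mags.sum() + 1e-12)
    # Simple (uncalibrated) base scale from a budget-style heuristic.
    base_sigma = sensitivity / epsilon_budget
    # Less noise where importance is high; the true method would calibrate
    # these scales to satisfy a formal sample-level DP guarantee.
    sigmas = base_sigma * (1.0 - weights)
    return pi + rng.normal(0.0, 1.0, size=pi.shape) * sigmas
```

A real implementation would replace the heuristic scale with noise calibrated through a privacy accountant (the paper adapts a Bayesian Accountant for this purpose), so that the total privacy cost across rounds is tracked rigorously.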