Abstract

The fusion of federated learning (FL) and differential privacy (DP) can provide more comprehensive and rigorous privacy protection, and has therefore attracted extensive interest from both academia and industry. However, facing the system-level challenge of device heterogeneity, most current synchronous FL paradigms suffer from low efficiency due to the straggler effect, which can be significantly reduced by asynchronous FL (AFL). Yet AFL itself has not been comprehensively studied, which poses a major challenge for the utility optimization of DP-enhanced AFL. Here, theoretically motivated multi-stage adaptive private algorithms are proposed to improve the trade-off between model utility and privacy for DP-enhanced AFL. In particular, we first build two DP-enhanced AFL frameworks that account for universal factors under different adversary models. We then give a rigorous analysis of the model convergence of AFL, based on which DP can be achieved adaptively with high utility. Through extensive experiments on different training models and benchmark datasets, we demonstrate that the proposed algorithms achieve the overall best performance, improving test accuracy by up to 24% under the same privacy loss and converging faster than state-of-the-art algorithms. Our frameworks provide an analytical foundation for private AFL and adapt to more complex FL application scenarios.
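To make the setting concrete, the following minimal Python sketch illustrates the general shape of a DP-enhanced asynchronous server step: each arriving client update is clipped and perturbed with Gaussian noise before being mixed into the global model with a staleness-discounted weight. This is an illustrative assumption of how such a pipeline could look, not the paper's algorithm; the function names (`privatize`, `async_server_step`), the staleness discount, and all parameter values are hypothetical.

```python
# Minimal sketch (not the paper's algorithm): a DP-enhanced asynchronous FL
# server that privatizes each client update and applies it with a
# staleness-aware mixing weight. All names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip_norm=1.0, noise_multiplier=1.0):
    """Clip the update to L2 norm <= clip_norm, then add Gaussian noise
    calibrated to the clipping bound (Gaussian mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def async_server_step(global_model, client_update, staleness, base_lr=1.0):
    """One asynchronous update: mix a privatized client update into the
    global model, down-weighting contributions computed on stale models."""
    private_update = privatize(client_update)
    mixing = base_lr / (1.0 + staleness)  # simple staleness discount
    return global_model + mixing * private_update

# Toy usage: clients report updates computed against stale model snapshots.
model = np.zeros(10)
for staleness in [0, 2, 1, 5]:
    fake_update = rng.normal(size=10)  # stands in for a local training step
    model = async_server_step(model, fake_update, staleness)
```

In this sketch, the noise scale and mixing weight are fixed; the adaptive multi-stage algorithms described in the abstract would instead adjust such quantities over the course of training based on the convergence analysis.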
