Abstract
Federated learning (FL) is a prominent distributed learning framework. Two main barriers to FL are communication cost and privacy breaches. In this work, we propose a novel privacy-preserving second-order FL method, called GDP-LocalNewton. To improve communication efficiency, we iterate with Newton's method and allow local computations before aggregation. To ensure a strong privacy guarantee, we use the notion of differential privacy (DP) and add Gaussian noise in each iteration. Using advanced tools of Gaussian differential privacy (GDP), we prove that the proposed algorithm satisfies the strong notion of GDP. We also establish the convergence of our algorithm; the convergence error has two sources, the local computations and the Gaussian noise added for DP. Finally, we conduct experiments to demonstrate the merits of the proposed algorithm.
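To make the structure described above concrete, here is a minimal sketch of one communication round: each worker runs a few noisy local Newton steps, then the server averages the resulting models. This is an illustrative assumption, not the paper's exact algorithm: the logistic-regression loss, the function names (`local_newton_step`, `simulate_round`), and the raw noise scale `sigma` are all hypothetical, and the sketch omits the sensitivity clipping and noise calibration that the GDP guarantee requires.

```python
import numpy as np

def local_newton_step(w, X, y, sigma, lam=1e-3, rng=None):
    """One hypothetical noisy Newton step for L2-regularized logistic
    regression (a stand-in loss; see the paper for the actual update)."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted probabilities
    grad = X.T @ (p - y) / n + lam * w              # local gradient
    H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(len(w))  # local Hessian
    step = np.linalg.solve(H, grad)                 # Newton direction
    # Gaussian noise added per iteration for DP; a real GDP guarantee
    # would calibrate sigma to the clipped sensitivity of the update.
    noise = rng.normal(0.0, sigma, size=w.shape)
    return w - step + noise

def simulate_round(w, workers, sigma, local_iters=3):
    """One round: several local Newton steps per worker, then averaging."""
    updates = []
    for X, y in workers:
        w_local = w.copy()
        for _ in range(local_iters):
            w_local = local_newton_step(w_local, X, y, sigma)
        updates.append(w_local)
    return np.mean(updates, axis=0)   # server-side aggregation
```

The sketch mirrors the two error sources named in the abstract: increasing `local_iters` saves communication but lets local models drift apart, while the per-iteration Gaussian noise trades accuracy for privacy.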