Abstract
As a decentralized training paradigm, federated learning (FL) promises data privacy by exchanging model parameters instead of raw local data. However, it is still impeded by the resource limitations of end devices and by privacy risks from the 'curious' cloud, and existing work predominantly ignores that these two issues are non-orthogonal in nature. In this article, we propose a joint design (i.e., AHFL) that accommodates both the efficiency expectations and the privacy protection of clients while targeting high inference accuracy. Based on a cloud-edge-end hierarchical FL framework, we carefully offload the training burden of devices to a proximate edge for enhanced efficiency and apply a two-level differential privacy mechanism for privacy protection. To resolve the conflict between dynamic resource consumption and privacy risk accumulation, we formulate an optimization problem for choosing configurations under correlated learning parameters (e.g., the number of iterations) and privacy control factors (e.g., noise intensity). An adaptive algorithmic solution is presented based on performance-oriented resource scheduling, budget-aware device selection, and adaptive local noise injection. Extensive evaluations are performed on three data distribution cases of two real-world datasets, using both a networked prototype and large-scale simulations. Experimental results show that AHFL relieves the resource burden of end devices (computation time reduced by 8.58%, communication time by 59.35%, and memory consumption by 43.61%) and achieves higher accuracy (up by 6.34%) than three typical baselines under limited resource and privacy budgets.
The code for our implementation is available at https://github.com/Guoyeting/AHFL.
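As a rough illustration of the kind of adaptive local noise injection the abstract describes, the sketch below clips a client's model update and adds Gaussian noise before it leaves the device. This is a generic differential-privacy pattern, not AHFL's actual mechanism; the function name and parameter values are hypothetical.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a local update to clip_norm in L2, then add Gaussian noise.

    sigma would be derived from the client's remaining privacy budget
    in a real system; here it is an illustrative constant.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    # Scale the update down so its L2 norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the clipping bound (sensitivity).
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

update = np.array([3.0, 4.0])   # L2 norm 5.0, clipped to norm 1.0
noisy = clip_and_noise(update)
```

In a two-level scheme of the sort the paper outlines, noise of this form could be injected both at the end device and at the edge aggregator, with the per-level noise intensity chosen jointly with the learning parameters.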
Published in: IEEE Transactions on Parallel and Distributed Systems