Abstract

Federated learning (FL) has been widely acknowledged as a promising solution for training machine learning (ML) models with privacy preservation. To reduce the traffic overhead incurred by FL systems, edge servers have been introduced between clients and the parameter server to aggregate clients' local models. Recent studies on this edge-assisted hierarchical FL scheme have focused on ensuring or accelerating model convergence by coping with various factors, e.g., uncertain network conditions, unreliable clients, and heterogeneous compute resources. This paper presents three new findings about the edge-assisted hierarchical FL scheme: 1) it wastes significant time during its two-phase training rounds; 2) it does not recognize or utilize model diversity when producing a global model; and 3) it is vulnerable to model poisoning attacks. To overcome these drawbacks, we propose FedEdge, a novel edge-assisted hierarchical FL scheme that accelerates model training with asynchronous local federated training and adaptive model aggregation. Extensive experiments are conducted on two widely used public datasets. The results demonstrate that, compared with state-of-the-art FL schemes, FedEdge accelerates model convergence by 1.14×–3.20× and improves model accuracy by 2.14%–6.63%.
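
To make the two-phase round structure of the edge-assisted hierarchical scheme concrete, the following is a minimal Python sketch of hierarchical federated averaging: edge servers first aggregate their clients' local models, then the cloud parameter server aggregates the edge models. It assumes plain FedAvg-style weighted averaging at both tiers; the names (`weighted_average`, `clients_per_edge`) and the toy data are illustrative, and the sketch does not reproduce FedEdge's asynchronous training or adaptive aggregation.

```python
import numpy as np

def weighted_average(models, sample_counts):
    """FedAvg-style aggregation: average parameter vectors,
    weighted by each participant's number of local samples."""
    total = sum(sample_counts)
    return sum(m * (n / total) for m, n in zip(models, sample_counts))

# Toy setup: 2 edge servers, each serving 3 clients that hold a
# 4-parameter local model and a local sample count (randomly generated).
rng = np.random.default_rng(0)
clients_per_edge = [
    [(rng.normal(size=4), int(rng.integers(50, 200))) for _ in range(3)]
    for _ in range(2)
]

# Phase 1: each edge server aggregates its own clients' local models.
edge_models, edge_counts = [], []
for clients in clients_per_edge:
    models = [m for m, _ in clients]
    counts = [n for _, n in clients]
    edge_models.append(weighted_average(models, counts))
    edge_counts.append(sum(counts))

# Phase 2: the cloud parameter server aggregates the edge models
# into the global model for the next training round.
global_model = weighted_average(edge_models, edge_counts)
print(global_model)
```

In the synchronous form shown here, phase 2 cannot start until the slowest client in phase 1 reports in; this straggler-induced idle time is one of the drawbacks the abstract identifies and that FedEdge's asynchronous design targets.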
