Abstract

Federated learning (FL), an emerging distributed machine learning (ML) technique, allows massive embedded devices and a server to collaboratively train a global ML model without collecting user data on a server. Most existing approaches adopt the traditional centralized FL paradigm with a single server, in one of two forms: the cloud-centric FL paradigm and the edge-centric FL paradigm. The cloud-centric paradigm can manage a large-scale FL system spanning massive user devices, but at a high communication cost, whereas the edge-centric paradigm can coordinate only a small-scale FL system, benefiting from the low communication delay of wireless networks. To exploit the advantages of both, in this paper we develop a hierarchical FL framework for the promising mobile-edge cloud computing (MECC) system, called HELCHFL, to achieve high-efficiency, low-cost hierarchical FL training. In particular, we formulate the theoretical foundation of HELCHFL to guarantee hierarchical training performance. Furthermore, to address the inherent communication and user heterogeneity issues of FL training, HELCHFL employs a utility-driven, heterogeneity-aware heuristic user selection strategy that improves training performance and reduces training delay. Finally, by analyzing and exploiting the slack time in FL training, HELCHFL determines device operating frequencies so as to reduce training energy cost. Experiments demonstrate that HELCHFL improves the highest achievable accuracy by up to 52.93%, attains a training speedup of up to 483.74%, and saves up to 45.59% of training energy compared to state-of-the-art baselines.
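To make the two-level training structure concrete, the following is a minimal sketch of hierarchical FedAvg-style aggregation, assuming a standard setup in which each edge server averages its clients' models for several edge rounds before the cloud averages the edge models. All names (`local_update`, `weighted_average`, `hierarchical_round`) are hypothetical illustrations, not the paper's HELCHFL algorithm; the utility-driven user selection and frequency control steps are omitted.

```python
import numpy as np

def local_update(model, grad_fn, lr=0.01, steps=5):
    """A client's local SGD steps on its private data (hypothetical)."""
    w = model.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def weighted_average(models, sizes):
    """FedAvg-style aggregation: weight each model by its sample count."""
    sizes = np.asarray(sizes, dtype=float)
    sizes /= sizes.sum()
    return sum(s * m for s, m in zip(sizes, models))

def hierarchical_round(cloud_model, edges, edge_rounds=2):
    """One cloud round: each edge server runs several aggregation rounds
    over its own clients, then the cloud averages the edge models."""
    edge_models, edge_sizes = [], []
    for clients in edges:  # clients: list of (grad_fn, n_samples) pairs
        w = cloud_model.copy()
        for _ in range(edge_rounds):
            local_models = [local_update(w, g) for g, _ in clients]
            w = weighted_average(local_models, [n for _, n in clients])
        edge_models.append(w)
        edge_sizes.append(sum(n for _, n in clients))
    return weighted_average(edge_models, edge_sizes)

# Toy usage: quadratic client losses, so grad_fn(w) = w - target.
rng = np.random.default_rng(0)
edges = [[(lambda w, t=rng.normal(size=4): w - t, 10) for _ in range(3)]
         for _ in range(2)]
model = np.zeros(4)
for _ in range(20):
    model = hierarchical_round(model, edges)
```

The point of the hierarchy, communication-wise, is that the expensive edge-to-cloud exchange happens only once per `edge_rounds` client rounds, which lets the system scale beyond a single edge server while avoiding the cloud-centric paradigm's per-round wide-area traffic.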

