Abstract

Federated Learning (FL) has been widely used for distributed machine learning in edge computing. The centralized aggregation server in FL tends to become a communication bottleneck and a single point of failure. To address these drawbacks, hierarchical model training frameworks such as Hierarchical Federated Learning (HFL) and E-Tree learning have been proposed. However, existing works lack solutions that quantitatively determine the aggregation frequencies of edge devices at the various layers of a hierarchical training framework. In this paper, we therefore propose a resource-based aggregation frequency controlling method, termed RAF, which determines the optimal aggregation frequencies of edge devices according to their heterogeneous resources so as to minimize the loss function. Our method reduces waiting time and fully utilizes the resources of the edge devices. In addition, RAF dynamically adjusts the aggregation frequencies at different phases of model training to achieve fast convergence and high accuracy. We evaluated the performance of RAF via extensive experiments with real datasets on our self-developed edge computing testbed. The results demonstrate that RAF outperforms the benchmark approaches in terms of learning accuracy and convergence speed.
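As an illustration of the resource-based idea described above, the sketch below assigns each edge device a local-aggregation frequency proportional to its available resources, so that faster devices perform more local rounds instead of idling while waiting for slower ones. This is a minimal sketch under assumed names and a simple proportional rule; it is not the RAF algorithm itself, whose frequency optimization and phase-dependent adjustment are defined in the paper.

```python
# Hypothetical sketch: resource-proportional aggregation frequencies.
# The function name, resource scores, and the proportional rule are
# illustrative assumptions, not the paper's RAF method.

def assign_frequencies(resources, base_freq=1, max_freq=8):
    """Give each edge device a local-aggregation frequency that scales
    with its resource score, reducing idle waiting on stragglers."""
    slowest = min(resources.values())
    freqs = {}
    for device, score in resources.items():
        # A device with k times the resources of the slowest one
        # performs up to k local rounds per upper-layer aggregation.
        k = round(base_freq * score / slowest)
        freqs[device] = min(max_freq, max(base_freq, k))
    return freqs

# Example with three heterogeneous devices (arbitrary resource scores).
freqs = assign_frequencies({"dev_a": 1.0, "dev_b": 2.0, "dev_c": 4.0})
print(freqs)
```

In a hierarchical setting, a rule of this kind would be applied per layer, and RAF additionally adapts the frequencies over the course of training rather than fixing them up front.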
