Abstract

Hierarchical federated learning (HFL), an emerging paradigm built on the client-edge-cloud architecture, can effectively leverage nearby edge servers for model aggregation, significantly reducing transmission overhead. However, HFL faces both technical and economic challenges. First, in the online setting, clients' computation resources and network bandwidth are revealed only when they participate in HFL model training. Second, model training consumes substantial client resources, such as energy, computation, and bandwidth, so it is unrealistic to assume that all clients contribute their resources voluntarily. Existing HFL research has not thoroughly investigated these challenges. This work develops a novel online algorithm, AUCS, based on auction theory and the combinatorial multi-armed bandit, to minimize the overall latency of HFL training. AUCS uses the ratio of an upper-confidence-bound-based reward estimate to a client's bid as the criterion for winner determination. It then computes the critical payment for each winner to guarantee the truthfulness of the incentive mechanism. Theoretically, AUCS achieves sub-linear regret, truthfulness, individual rationality, and computational efficiency, and guarantees model convergence. Simulations on real-world datasets and training tasks demonstrate the advantages of AUCS in terms of training latency, model accuracy, and system efficiency.
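The winner-determination step described above can be illustrated with a minimal sketch. The abstract does not give AUCS's exact formulas, so the UCB exploration term, the budget constraint, and all function and field names below (`ucb_score`, `select_winners`, `mean`, `plays`, `bid`) are illustrative assumptions, not the paper's actual specification:

```python
import math

def ucb_score(mean_reward, n_plays, t, c=1.0):
    """Upper-confidence-bound estimate of a client's reward.

    Standard UCB1-style bonus (an assumption; the paper's exact
    confidence term may differ). Unexplored clients get +inf so
    they are selected at least once.
    """
    if n_plays == 0:
        return float("inf")
    return mean_reward + c * math.sqrt(2.0 * math.log(t) / n_plays)

def select_winners(clients, budget):
    """Greedily pick winners by UCB-reward-to-bid ratio under a budget.

    Each client is a dict with keys: id, mean, plays, t, bid.
    The budget constraint is illustrative; AUCS's actual feasibility
    condition is not stated in the abstract.
    """
    ranked = sorted(
        clients,
        key=lambda cl: ucb_score(cl["mean"], cl["plays"], cl["t"]) / cl["bid"],
        reverse=True,
    )
    winners, spent = [], 0.0
    for cl in ranked:
        if spent + cl["bid"] <= budget:
            winners.append(cl["id"])
            spent += cl["bid"]
    return winners
```

A never-selected client ranks first (infinite UCB ratio), which captures the exploration side of the bandit; among explored clients, a high estimated reward per unit bid wins, capturing exploitation. The subsequent critical-payment computation, which secures truthfulness, is omitted here.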
