Abstract

Federated learning (FL) is a promising distributed learning paradigm that effectively mitigates the privacy leakage and communication issues of centralized learning. In each training iteration, FL nodes upload only their local training results to the central server without disclosing their raw training data, and the server aggregates the local results of all FL nodes to update the global model. The performance of the global model is therefore highly dependent on the nodes' cooperation. However, it is challenging to motivate mobile edge devices to participate in the FL process without a suitable incentive. Another significant concern for mobile edge devices is the communication and computational energy cost of participation. Therefore, considering the high participation cost and the weak communication channels with the central server, especially for distant nodes, in this paper we propose a relay-assisted energy-efficient scheme for federated learning, in which each FL computational node is not only rewarded monetarily based on its local dataset, but is also motivated, owing to its locality advantage, to act as a relay that assists distant nodes in uploading their local results. To achieve a stable pairing between FL computational nodes and assisting relays in a distributed fashion, a many-to-one matching algorithm is applied, under which no computational node or relay can unilaterally deviate from its current pairing for higher revenue. Extensive simulations illustrate the correctness and effectiveness of the proposed scheme.
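The abstract does not specify the matching procedure itself, but the stability property it describes (no node or relay can profitably deviate from its pairing) is characteristic of deferred-acceptance algorithms. The following is a minimal, hypothetical sketch of a generic many-to-one deferred-acceptance matching, with computational nodes proposing to relays and each relay holding a quota; the preference lists here are illustrative stand-ins for the revenue and energy utilities defined in the paper, and all names are assumptions.

```python
# Hypothetical many-to-one deferred-acceptance matching between
# distant computational nodes (proposers) and relay nodes (with quotas).
# Preference lists stand in for the paper's revenue/energy utilities.

def many_to_one_matching(node_prefs, relay_prefs, quota):
    """node_prefs: node -> ordered list of relays it prefers (best first).
    relay_prefs: relay -> ordered list of acceptable nodes (best first).
    quota: relay -> maximum number of nodes the relay can serve."""
    # Precompute each relay's ranking of nodes for fast comparison.
    rank = {r: {n: i for i, n in enumerate(prefs)}
            for r, prefs in relay_prefs.items()}
    matched = {r: [] for r in relay_prefs}      # relay -> accepted nodes
    next_choice = {n: 0 for n in node_prefs}    # next relay to propose to
    free = list(node_prefs)
    while free:
        n = free.pop()
        if next_choice[n] >= len(node_prefs[n]):
            continue                            # node exhausted its list
        r = node_prefs[n][next_choice[n]]
        next_choice[n] += 1
        if n not in rank[r]:
            free.append(n)                      # relay rejects unacceptable node
            continue
        matched[r].append(n)
        matched[r].sort(key=lambda x: rank[r][x])
        if len(matched[r]) > quota[r]:
            rejected = matched[r].pop()         # evict least-preferred node
            free.append(rejected)
    return matched


# Illustrative instance: three distant nodes, two relays.
nodes = {'n1': ['r1', 'r2'], 'n2': ['r1', 'r2'], 'n3': ['r1']}
relays = {'r1': ['n1', 'n3', 'n2'], 'r2': ['n2', 'n1']}
quota = {'r1': 2, 'r2': 1}
print(many_to_one_matching(nodes, relays, quota))
# → {'r1': ['n1', 'n3'], 'r2': ['n2']}
```

The resulting pairing is stable in the sense the abstract uses: n2 would prefer relay r1, but r1 is at quota with two nodes it ranks higher, so no node-relay pair can jointly deviate for higher revenue.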
