Abstract

Federated learning (FL) is a promising training paradigm for achieving ubiquitous intelligence in future 6G communication systems. However, applying FL in 6G-enabled edge systems is challenging because decentralized training consumes considerable energy, while mobile devices are mostly battery-powered and resource-constrained. The intensive computation and communication cost of local updates, accumulated over hundreds of global rounds, creates an energy bottleneck, which is exacerbated when the data is not independently and identically distributed (non-IID). To address this issue, we propose FedRelay, a generic multi-flow relay learning framework in which local updates are performed relay-by-relay along each training flow via model propagation. We also present a decentralized relay selection protocol that exploits the diversity of cooperative communication networks. We then formulate a FedRelay optimization problem that simultaneously minimizes the energy consumption of local updates and alleviates global non-IIDness. Technically, we propose an approximation algorithm that jointly optimizes computation frequency and transmission power, thereby reducing the local training overhead. We further regulate the training topology of each flow with a greedy relay policy that encourages effective information exchange among devices. Experimental results show that, compared to state-of-the-art federated learning algorithms, our framework reduces the total energy required to reach a reasonable global test accuracy by up to a factor of five.
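To make the relay-by-relay idea concrete, the following is a minimal toy sketch (not the paper's implementation): a model is propagated along one training flow, and each device in the flow performs a few local gradient steps on its own data before forwarding the model to the next relay. All names, the least-squares objective, and the non-IID split are hypothetical illustrations.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on one device's data (least-squares loss)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedrelay_flow(w, devices):
    """Propagate the model relay-by-relay along one flow: each device
    trains locally, then forwards the updated model to the next relay."""
    for X, y in devices:
        w = local_sgd(w, X, y)
    return w

# Toy non-IID split: each device draws inputs from a different region.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
devices = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    devices.append((X, X @ w_true))

w = fedrelay_flow(np.zeros(2), devices)
```

In the full framework, multiple such flows run in parallel and the relay order is chosen by the selection protocol; here a single fixed flow suffices to show the propagation pattern.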
