Abstract

Federated learning (FL) has emerged as a key technology for enabling next-generation AI at scale. Classical FL systems use single-hop cellular links to deliver local models from mobile workers to edge routers, which then reach remote cloud servers via the high-speed Internet core for global model averaging. Owing to their cost-efficiency, wireless multi-hop networks have been widely exploited to build communication backbones. Enabling FL over wireless multi-hop networks can therefore make FL accessible at low cost to everyone, including users in under-developed areas and disaster sites. Wireless multi-hop FL, however, suffers from severe communication constraints, including noisy and interference-rich wireless links, which result in slow and nomadic FL model updates. To address this, we propose a novel machine learning-enabled wireless multi-hop FL framework, FedAir, that greatly mitigates the adverse impact of wireless communications on FL performance metrics such as model convergence time. The framework allows us to rapidly prototype, deploy, and evaluate FL algorithms on ML-enabled, programmable wireless routers (ML-routers). Experiments on the deployed testbed show that the wireless multi-hop FL framework can greatly accelerate the runtime convergence of the de-facto FL algorithm, FedAvg.
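
For readers unfamiliar with the global model averaging step referenced above, the sketch below illustrates the core FedAvg aggregation rule (a sample-size-weighted average of worker models). This is a minimal illustration only, not the FedAir implementation; the function and variable names are hypothetical.

    import numpy as np

    def fedavg_aggregate(local_models, num_samples):
        """Weighted average of worker models, as in FedAvg.

        local_models: list of dicts mapping parameter name -> np.ndarray
        num_samples:  list of per-worker training-set sizes (the weights)
        """
        total = float(sum(num_samples))
        global_model = {}
        for name in local_models[0]:
            # Each parameter is the sample-size-weighted mean of the workers' copies.
            global_model[name] = sum(
                (n / total) * m[name] for m, n in zip(local_models, num_samples)
            )
        return global_model

    # Hypothetical usage: two workers, one weight tensor each.
    workers = [{"w": np.ones((2, 2))}, {"w": np.zeros((2, 2))}]
    print(fedavg_aggregate(workers, num_samples=[30, 10])["w"])  # 0.75 everywhere

In a multi-hop deployment, the cost of this step is dominated not by the averaging itself but by delivering the local models across lossy wireless links, which is the bottleneck FedAir targets.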
