Abstract

Traditional federated learning methods assume that users have fully labeled data on their devices for training, but in practice labels are difficult to obtain for reasons such as user privacy concerns, high labeling costs, and lack of expertise. Semi-supervised learning has been introduced into federated learning to address the lack of labels, but its performance suffers from slow training and non-convergence in real network environments. In this article, we propose Federated Incremental Learning (FedIL), a semi-supervised federated learning (SSFL) framework for edge computing that overcomes these limitations. FedIL introduces a group-based asynchronous training algorithm with provable convergence, which accelerates model training by allowing more clients to participate simultaneously. We developed a prototype system and performed trace-driven simulations to demonstrate FedIL's superior performance.
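To make the group-based asynchronous idea concrete, the following is a minimal sketch of how a server might merge updates from client groups that train independently. The grouping into two fixed groups, the staleness-discounted mixing factor alpha, and the toy least-squares task are our own assumptions for illustration; they are not FedIL's actual algorithm, and the sketch omits the semi-supervised (unlabeled-data) component entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, client_data, lr=0.1):
    """One step of least-squares gradient descent on a client's (x, y) samples."""
    x, y = client_data
    grad = x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad

# Synthetic regression task shared by all clients (stand-in for real local data).
true_w = np.array([2.0, -1.0])
def make_client(n=32):
    x = rng.normal(size=(n, 2))
    return x, x @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client() for _ in range(8)]
groups = [clients[:4], clients[4:]]        # hypothetical: two asynchronous groups

server_w = np.zeros(2)
for rnd in range(100):
    # Each group trains independently; the server merges whichever group's
    # update arrives, discounted by a staleness-style factor (an assumption
    # here, not FedIL's published weighting rule).
    for staleness, group in enumerate(groups):
        local_ws = [local_update(server_w.copy(), c) for c in group]
        group_w = np.mean(local_ws, axis=0)
        alpha = 0.5 / (1 + staleness)      # staler groups contribute less
        server_w = (1 - alpha) * server_w + alpha * group_w

print("estimated weights:", server_w)      # converges near [2, -1]
```

The point of the sketch is that the server never waits for all groups to synchronize: each group's update is folded in as it arrives, which is what lets more clients contribute to training concurrently.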
