Abstract

As an enabler of sixth-generation (6G) communication technology, Federated Learning (FL) triggers a paradigm shift from "connected things" to "connected intelligence". FL implements on-device learning, where massive numbers of end devices jointly train a model locally without exposing their private data. However, FL suffers from low accuracy and slow convergence when no data is shared with the central server and the data distribution is non-IID. In recent years, attempts have been made at hybrid FL, in which participants share a very small amount of their data (e.g., less than 1%). This shared data creates an opportunity: the server can use it to assist the FL process and mitigate the non-IID challenge. Notably, existing hybrid FL methods only apply model-level techniques inherited from traditional FL and do not fully exploit the characteristics of the shared data for targeted improvements. In this paper, we propose FedAux, a novel knowledge-level hybrid FL method that uses the shared data to construct an auxiliary model and then transfers its general knowledge to the traditionally aggregated model or to the client models, improving the accuracy of the global model and speeding up its convergence. We also propose two specific knowledge transfer strategies, named c-transfer and i-transfer. We extensively analyze and evaluate our method against the well-known FL methods FedAvg and the Hybrid-FL protocol. The results indicate that FedAux achieves higher accuracy (by 10.89%) and a faster convergence rate than the other methods.
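To make the idea above concrete, here is a minimal sketch (not the authors' implementation) of one round of knowledge-level hybrid FL as the abstract describes it: the server aggregates client updates with FedAvg, trains an auxiliary model on the small shared dataset, and transfers the auxiliary model's general knowledge into the aggregated global model, here via soft-label distillation, which is one plausible reading of the "transfer" step. All function names and hyperparameters (temperature, epochs, learning rates) are illustrative assumptions.

```python
# Hypothetical sketch of the FedAux idea (assumed structure, not the paper's code).
import copy
import torch
import torch.nn.functional as F

def fedavg_aggregate(client_states, client_sizes):
    """Standard FedAvg: average client parameters weighted by dataset size."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg

def train_auxiliary(aux_model, shared_loader, epochs=5, lr=1e-3):
    """Train the auxiliary model on the small shared dataset held by the server."""
    opt = torch.optim.SGD(aux_model.parameters(), lr=lr)
    aux_model.train()
    for _ in range(epochs):
        for x, y in shared_loader:
            opt.zero_grad()
            F.cross_entropy(aux_model(x), y).backward()
            opt.step()
    return aux_model

def transfer_knowledge(global_model, aux_model, shared_loader,
                       temperature=2.0, lr=1e-3, steps=1):
    """One possible 'transfer' step: distil the auxiliary model's soft
    predictions on the shared data into the aggregated global model."""
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    aux_model.eval()
    global_model.train()
    for _ in range(steps):
        for x, _ in shared_loader:
            with torch.no_grad():
                teacher = F.softmax(aux_model(x) / temperature, dim=1)
            student = F.log_softmax(global_model(x) / temperature, dim=1)
            loss = F.kl_div(student, teacher, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return global_model

# Per-round usage (sketch): client states -> FedAvg -> knowledge transfer
# global_model.load_state_dict(fedavg_aggregate(states, sizes))
# transfer_knowledge(global_model, aux_model, shared_loader)
```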
