Abstract

Because traditional centralized machine learning (ML) aggregates raw data and thereby risks privacy breaches, applications, data, and computing power are being pushed from centralized data centers to network edge nodes. Federated Learning (FL) is an emerging privacy-preserving distributed ML paradigm suited to edge network applications, and it addresses both of these issues. However, current FL methods cannot flexibly handle the challenges of model personalization and communication overhead in such applications. Inspired by mixtures of global and local models, we propose a Communication-Efficient Personalized Federated Meta-Learning algorithm that obtains a novel personalized model by introducing a personalization parameter. Adjusting the size of this parameter improves model accuracy and accelerates convergence. Furthermore, the local model to be uploaded is transformed into a latent space through an autoencoder, reducing the amount of transmitted data and thus the communication overhead. Local and task-global differential privacy are applied to protect model generation. Simulation experiments demonstrate that, compared with several other algorithms, our method obtains better personalized models at lower communication overhead for edge network applications.
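The following is a minimal sketch of the three mechanisms the abstract names: mixing local and global weights via a personalization parameter, compressing the upload through an autoencoder's latent space, and adding differential-privacy noise. All names (`alpha`, `encode`, `decode`, `add_dp_noise`) and the linear autoencoder are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Hypothetical sketch only; dimensions, noise scale, and the linear
# autoencoder are stand-ins for the paper's unspecified components.
import numpy as np

rng = np.random.default_rng(0)

def personalize(w_local, w_global, alpha):
    """Mix local and global weights; alpha is the personalization parameter."""
    return alpha * w_local + (1.0 - alpha) * w_global

# Toy linear autoencoder compressing a flattened model update.
d, k = 1024, 64                                    # original / latent sizes
E = rng.normal(0, 1.0 / np.sqrt(d), size=(d, k))   # encoder weights
D = np.linalg.pinv(E)                              # decoder (pseudo-inverse)

def encode(w):
    return w @ E        # latent vector: the only thing uploaded (k << d)

def decode(z):
    return z @ D        # server-side approximate reconstruction

def add_dp_noise(z, sigma=0.01):
    """Gaussian mechanism as a stand-in for local differential privacy."""
    return z + rng.normal(0.0, sigma, size=z.shape)

# Client side: personalize, compress, privatize, then upload z.
w_local = rng.normal(size=d)
w_global = rng.normal(size=d)
w_pers = personalize(w_local, w_global, alpha=0.7)
z = add_dp_noise(encode(w_pers))

# Server side: recover an approximate update from the latent vector.
w_hat = decode(z)
print("compression ratio:", d / k,
      "reconstruction error:", np.linalg.norm(w_pers - w_hat))
```

Under these assumptions, the communication saving comes from uploading the k-dimensional latent vector instead of the d-dimensional model, while `alpha` trades off between a purely local model (alpha = 1) and the shared global model (alpha = 0).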
