Abstract

Federated learning allows resource-constrained edge devices to cooperatively train machine learning models while keeping data local. However, under heterogeneous data the global model converges slowly and may even deviate from the optimal solution. To address this problem, this paper proposes an adaptive personalized federated learning (APFL) algorithm, which treats federated optimization over heterogeneous data as a multi-task learning problem spanning both spatial and temporal dimensions. First, a parameter decomposition strategy splits the model parameters into globally shared parameters and client-specific parameters, personalizing the model for each client while extracting knowledge common to all clients. Then, APFL models local optimization as sequential multi-task learning performed on each client: an elastic weight consolidation penalty is imposed on updates to the globally shared parameters, so that the globally shared model retains important parameters while quickly learning unimportant ones. Comparative experiments on multiple federated benchmark datasets verify the effectiveness and superiority of the proposed method.
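The core ideas in the abstract, parameter decomposition and an elastic-weight-consolidation (EWC) penalty on the shared parameters, can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: the linear model, the function names (`local_step`, `ewc_penalty_grad`), the diagonal `fisher` importance weights, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def ewc_penalty_grad(shared, shared_prev, fisher, lam):
    """Gradient of the EWC term (lam/2) * sum_i F_i * (theta_i - theta_prev_i)^2,
    which pulls important shared parameters back toward the previous global model."""
    return lam * fisher * (shared - shared_prev)

def local_step(shared, personal, x, y, shared_prev, fisher, lam=0.5, lr=0.1):
    """One local gradient step on a toy linear model y ~ x @ shared + personal.
    The EWC penalty is applied only to the globally shared parameters;
    the client-specific parameter is updated without it."""
    pred = x @ shared + personal
    err = pred - y
    g_shared = x.T @ err / len(y) + ewc_penalty_grad(shared, shared_prev, fisher, lam)
    g_personal = err.mean()
    return shared - lr * g_shared, personal - lr * g_personal

# Simulated data for one client: a shared linear signal plus a client-specific bias.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
y = x @ np.ones(4) + 0.5          # the 0.5 offset plays the role of local heterogeneity

shared = np.zeros(4)              # globally shared parameters (assumed decomposition)
personal = 0.0                    # client-specific parameter
shared_prev = np.zeros(4)         # global model received in the previous round
fisher = np.ones(4)               # assumed parameter-importance weights (Fisher diagonal)

for _ in range(200):
    shared, personal = local_step(shared, personal, x, y, shared_prev, fisher)
```

With a larger `lam` or larger entries in `fisher`, the corresponding shared coordinates stay closer to `shared_prev` (memory retention); coordinates with small importance move freely toward the local optimum (fast learning).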
