Abstract
Federated learning (FL) is a collaborative machine learning technique for training a global model (GM) without collecting clients' private data. The main challenges in FL are statistical diversity among clients, the limited computing capability of clients' devices, and the excessive communication overhead between the server and clients. To address these challenges, we propose a novel sparse personalized FL scheme via maximizing correlation (FedMac). By incorporating an approximated l1-norm and the correlation between client models and the GM into the standard FL loss function, FedMac improves performance on statistically diverse data and reduces the communication and computation loads required in the network compared with non-sparse FL. Convergence analysis shows that the sparse constraints in FedMac do not affect the convergence rate of the GM, and theoretical results show that FedMac achieves good sparse personalization, outperforming personalized methods based on the l2-norm. Experimentally, we demonstrate the benefits of this sparse personalization architecture over state-of-the-art personalization methods (e.g., FedMac achieves 98.95%, 99.37%, 90.90%, 89.06%, and 73.52% accuracy on the MNIST, FMNIST, CIFAR-100, Synthetic, and CINIC-10 datasets, respectively, under non-independent and identically distributed (non-i.i.d.) settings).
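To make the loss modification concrete, the following is a minimal sketch of a FedMac-style personalized objective: a client's local loss plus a smooth l1 surrogate (here sqrt(theta^2 + eps), one common approximation) minus a term rewarding correlation with the global model. The names `lam`, `gamma`, and `eps` are illustrative hyperparameters, not the paper's notation.

```python
import numpy as np

def fedmac_objective(theta, w_global, local_loss, lam=0.1, gamma=0.5, eps=1e-8):
    """Sketch of a sparse personalized objective in the spirit of FedMac.

    theta      : client's personalized model parameters (1-D array)
    w_global   : current global model parameters (1-D array)
    local_loss : callable giving the client's empirical loss at theta
    lam        : weight of the approximated l1-norm (sparsity pressure)
    gamma      : weight of the correlation with the global model
    eps        : smoothing constant for the l1 approximation
    """
    # Smooth surrogate for ||theta||_1: differentiable everywhere.
    l1_approx = np.sum(np.sqrt(theta ** 2 + eps))
    # Correlation between the client model and the global model;
    # maximizing it keeps personalization anchored to the GM.
    corr = float(np.dot(theta, w_global))
    return local_loss(theta) + lam * l1_approx - gamma * corr
```

Intuitively, the correlation term pulls the client model toward the global model's direction, while the l1 surrogate drives small weights toward zero, cutting the parameters that must be computed and communicated.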