Abstract
As AI applications become increasingly integrated into daily life, protecting user privacy while enabling collaborative model training has become a crucial challenge, especially in decentralized edge computing environments. Traditional federated learning (FL) approaches, which rely on centralized model aggregation, struggle in such settings due to bandwidth limitations, data heterogeneity, and varying device capabilities among edge nodes. To address these issues, we propose PearFL, a decentralized FL framework that enhances collaboration and model generalization by introducing prototype exchange mechanisms. PearFL allows each client to share lightweight prototype information with its neighbors, minimizing communication overhead and improving model consistency across distributed devices. Experimental evaluations on benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, demonstrate that PearFL achieves superior communication efficiency, convergence speed, and accuracy compared to conventional FL methods. These results confirm PearFL’s efficacy as a scalable solution for decentralized learning in heterogeneous and resource-constrained environments.
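The abstract describes clients exchanging lightweight class prototypes with neighbors instead of full model weights. A minimal sketch of one plausible form of this idea, assuming prototypes are per-class mean feature vectors and neighbor aggregation is a simple average (the function names and the averaging rule are illustrative assumptions, not the paper's stated algorithm):

```python
def compute_prototypes(features, labels, num_classes):
    """Class prototype = mean feature vector over local samples of that class."""
    dim = len(features[0])
    sums = [[0.0] * dim for _ in range(num_classes)]
    counts = [0] * num_classes
    for x, y in zip(features, labels):
        counts[y] += 1
        for i, v in enumerate(x):
            sums[y][i] += v
    # Classes absent locally keep a zero prototype in this sketch.
    return [[s / c if c else 0.0 for s in row]
            for row, c in zip(sums, counts)]

def aggregate_with_neighbors(local_protos, neighbor_protos):
    """Average local prototypes with prototype sets received from neighbors."""
    all_sets = [local_protos] + neighbor_protos
    n = len(all_sets)
    return [[sum(p[c][i] for p in all_sets) / n
             for i in range(len(local_protos[c]))]
            for c in range(len(local_protos))]
```

Exchanging only these per-class vectors (one `dim`-length vector per class) rather than full parameter tensors is what keeps per-round communication small in prototype-based schemes.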