Abstract

Federated Learning (FL) is a distributed technique for training machine learning models collaboratively across multiple clients. It allows local devices to cooperatively train a global model without compromising data privacy or requiring extensive data transfer. However, the inherent data heterogeneity across clients poses a challenge: a single global model trained through FL struggles to adapt to the diverse distributions of individual clients' data. This discrepancy leads to a marked decline in model accuracy, slows FL convergence, and can even cause FL to diverge. To address this, this paper proposes Residual Attention for Federated Learning (RAFL), a method for personalized federated learning (PFL) that applies a residual multi-head attention mechanism to enrich personalized feature information. In addition, RAFL leverages a global category embedding layer to learn global feature information. To evaluate the effectiveness of the proposed method, we conduct extensive experiments on three benchmark datasets, comparing RAFL against eight baseline methods. The results demonstrate that RAFL has a stronger personalization capability than the baselines, achieving the highest accuracy on all three datasets. Notably, its accuracy is approximately 2.5 percentage points higher than that of current state-of-the-art PFL methods.
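
To make the described architecture concrete, the following is a minimal PyTorch-style sketch of a residual multi-head attention block paired with a learnable global category embedding. It is an illustration under our own assumptions (the class name, tensor shapes, and the choice to attend over the category embeddings are all hypothetical), not the authors' implementation.

import torch
import torch.nn as nn

class ResidualAttentionHead(nn.Module):
    """Sketch of a residual multi-head attention block combined with a
    global category embedding, in the spirit of the RAFL description.
    All names and shapes are assumptions, not the paper's code."""

    def __init__(self, feat_dim: int, num_classes: int, num_heads: int = 4):
        super().__init__()
        # Global category embedding: one learnable vector per class,
        # intended to carry globally shared feature information.
        self.category_embed = nn.Embedding(num_classes, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) features from a shared backbone.
        q = feats.unsqueeze(1)                        # (batch, 1, feat_dim)
        kv = self.category_embed.weight.unsqueeze(0)  # (1, num_classes, feat_dim)
        kv = kv.expand(feats.size(0), -1, -1)         # (batch, num_classes, feat_dim)
        # Attend from the client's features over the global category embeddings.
        attn_out, _ = self.attn(q, kv, kv)
        # Residual connection: enrich the personalized features with the
        # attended global information rather than replacing them.
        fused = self.norm(q + attn_out).squeeze(1)    # (batch, feat_dim)
        return self.classifier(fused)

# Usage: a per-client personalized head over 64-dimensional backbone features.
head = ResidualAttentionHead(feat_dim=64, num_classes=10)
logits = head(torch.randn(8, 64))  # (8, 10)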
