Abstract

Federated learning (FL) is a promising approach that allows many clients to jointly train a model without sharing their raw data. Because clients have different data preferences, class imbalance frequently arises in real-world FL problems and exposes existing FL methods to poisoning attacks. In this work, we first propose a new attack, the Class Imbalance Attack, which can degrade the test accuracy of one or more targeted classes, even to 0, under state-of-the-art robust FL methods. To defend against such attacks, we further propose a Class-Balanced FL method with a novel contribution-wise Byzantine-robust aggregation rule. The server is initialized with a small dataset and maintains its own model (called the server model). In the proposed rule, an honest score and a contribution score are dynamically assigned to each client according to the server model; these two scores are then used to compute a weighted average of the client gradients in each training iteration. We conduct experiments on five datasets against state-of-the-art poisoning attacks, including the Class Imbalance Attack, and the empirical results demonstrate the effectiveness of the proposed Class-Balanced FL method.
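
To make the score-weighted aggregation concrete, below is a minimal sketch of what such a rule might look like. It only illustrates the weighted-averaging step described in the abstract; the functions that would actually compute the honest and contribution scores from the server model are not reproduced here, so the score values are hypothetical inputs.

```python
import numpy as np

def score_weighted_aggregate(client_grads, honest_scores, contribution_scores):
    """Aggregate client gradients as a weighted average.

    Hypothetical sketch: each client's weight combines its honest score and
    its contribution score (both assumed non-negative), mirroring the
    contribution-wise Byzantine-robust rule described in the abstract.
    """
    weights = np.asarray(honest_scores, dtype=float) * np.asarray(
        contribution_scores, dtype=float
    )
    total = weights.sum()
    if total == 0:
        # No client is trusted this round; fall back to plain averaging.
        weights = np.ones_like(weights)
        total = weights.sum()
    weights = weights / total
    # Weighted sum over the stacked client gradients.
    stacked = np.stack(client_grads)  # shape: (n_clients, n_params)
    return np.tensordot(weights, stacked, axes=1)

# Example usage: three clients, a 4-parameter model, illustrative scores.
grads = [np.random.randn(4) for _ in range(3)]
agg = score_weighted_aggregate(
    grads,
    honest_scores=[0.9, 0.1, 0.8],
    contribution_scores=[0.7, 0.2, 0.6],
)
print(agg)
```

In this sketch, a client flagged as dishonest or low-contribution (e.g., the second client above) is down-weighted rather than hard-filtered, which is one plausible reading of "weighted average of the client gradients".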
