Abstract

Federated learning is a distributed machine learning paradigm in which physically distributed computing nodes collaboratively train a global model. Because workers in federated learning usually do not share their training data with others, malicious workers can tamper with the parameters (e.g., weights/gradients) of their local models to degrade the global model’s training accuracy without being easily detected. Such attacks are generally called Byzantine attacks. Existing solutions either offer limited resistance to Byzantine attacks or are not applicable to federated learning. In this paper, we propose ELITE, a robust parameter aggregation algorithm that defends federated learning against Byzantine attacks. Inspired by the observation that the parameters of malicious workers usually deviate from those of benign workers, we introduce entropy to efficiently detect malicious workers. We evaluate ELITE on image classification model training under three typical attacks, and experimental results show that ELITE resists various Byzantine attacks and outperforms existing algorithms, improving model accuracy by up to 80%.
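To make the entropy idea concrete, the following is a minimal hypothetical sketch, not ELITE's actual algorithm (the abstract does not specify how entropy is computed). It assumes each worker's update is a flat vector, treats the normalized absolute values of the update as a probability distribution, scores each worker by its Shannon entropy, and drops the workers whose entropy deviates most from the median before averaging the rest. The function names `update_entropy` and `entropy_filter_aggregate` and the parameter `keep_frac` are illustrative choices.

```python
import numpy as np

def update_entropy(update, eps=1e-12):
    """Shannon entropy of one worker's update, treating the
    normalized absolute parameter values as a distribution."""
    a = np.abs(np.asarray(update, dtype=float))
    p = a / (a.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

def entropy_filter_aggregate(updates, keep_frac=0.7):
    """Hypothetical entropy-based robust aggregation: rank workers
    by how far their update entropy deviates from the median entropy,
    drop the largest deviations (suspected malicious workers), and
    average the remaining updates."""
    updates = np.asarray(updates, dtype=float)
    ents = np.array([update_entropy(u) for u in updates])
    dev = np.abs(ents - np.median(ents))          # distance from typical entropy
    k = max(1, int(len(updates) * keep_frac))     # number of workers to keep
    keep = np.argsort(dev)[:k]                    # smallest deviation = most typical
    return updates[keep].mean(axis=0)
```

In this sketch, a benign worker's noisy update spreads its mass over many coordinates (entropy near the group median), while an attacker who concentrates a large change on a few parameters produces an unusually low entropy and is filtered out before averaging.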
