Abstract
The server in federated learning maintains a global model by aggregating local updates from trusted clients. Poisoning attacks against federated learning influence the global model by manipulating these local updates. Existing works typically employ one of two strategies on the controlled clients: modifying local data or manipulating local updates. In this paper, we propose DUPS: Data Poisoning attacks with Uncertain Sample selection, which does not directly alter the data or the local updates of the controlled clients. The main idea is to sample from, rather than synthesize or alter, the original data when training poisoning updates. First, the samples of the controlled clients are classified using the local model; those bearing the label with the highest number of misclassifications are selected as uncertain samples. Second, these uncertain samples alone are used to train poisoned updates. Finally, all controlled clients upload the poisoned updates to the server. Experiments are carried out on five datasets in comparison with five state-of-the-art algorithms. Results show that the proposed attack effectively improves the poisoning success rate.
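The uncertain-sample selection step described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the function name, data representation (plain label lists), and interface are assumptions. It counts misclassifications of the local model per true label and returns the misclassified samples of the label with the highest count.

```python
from collections import Counter

def select_uncertain_samples(labels, preds):
    """Hypothetical sketch of DUPS-style uncertain-sample selection.

    labels: true labels of a controlled client's samples
    preds:  labels predicted for those samples by the local model
    Returns indices of the misclassified samples whose true label
    has the highest misclassification count.
    """
    # Indices of samples the local model gets wrong.
    wrong = [i for i, (y, p) in enumerate(zip(labels, preds)) if y != p]
    # Misclassification count per true label.
    counts = Counter(labels[i] for i in wrong)
    target_label, _ = counts.most_common(1)[0]
    # Uncertain samples: misclassified instances of that label.
    return [i for i in wrong if labels[i] == target_label]

# Toy example: label 1 is misclassified twice, label 0 once,
# so the two misclassified label-1 samples (indices 2 and 3) are chosen.
idx = select_uncertain_samples([0, 0, 1, 1, 1], [0, 1, 0, 0, 1])
```

In a full attack, each controlled client would then train its local update only on the selected samples before uploading it to the server.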