Abstract

In federated learning (FL), poisoning attacks compromise the whole system by manipulating client data, tampering with the training objective, and inducing arbitrary attacker-chosen behaviors. Numerous poisoning attacks have been carefully studied; however, they remain impractical in real-world scenarios for two reasons: (i) repeated malicious client selection: poisoning attacks succeed only when the malicious client is selected in enough epochs (i.e., more than half of the epochs); (ii) long-term poisoning training: poisoning typically requires far more epochs than normal training (i.e., 3 times longer), and neither condition holds in real deployments. To address these overlooked problems, we propose a Poisoning Enhanced attack (PoE) against FL, a general poisoning reinforcement framework. It is designed to transfer part of the predicted probability of the source class to the target class, narrowing the inter-class distance between the source and target classes in feature space and thereby making attacks easier. To this end, the attacking client uses label smoothing to change the model's prediction distribution, dragging the global model in a direction favorable to poisoning. Extensive experiments show that PoE significantly improves the attack success rate (8.4x on average) in practical FL with normal training epochs. It also achieves state-of-the-art adaptive attack performance against defensive FL systems (i.e., robust aggregations). The code of PoE is available at https://github.com/Leon022/poisoningenhancement.
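
The label-smoothing mechanism described above, transferring a fraction of the source class's predicted probability to the target class, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the function names and the mixing coefficient alpha are hypothetical, and PyTorch is assumed.

    import torch
    import torch.nn.functional as F

    def smoothed_poison_targets(labels, source_class, target_class,
                                num_classes, alpha=0.3):
        # Hypothetical sketch: turn hard labels into soft targets, moving a
        # fraction `alpha` of the source class's probability mass onto the
        # target class; all other samples keep their one-hot labels.
        targets = F.one_hot(labels, num_classes).float()
        is_source = labels == source_class
        targets[is_source, source_class] -= alpha
        targets[is_source, target_class] += alpha
        return targets

    def malicious_local_loss(logits, labels, source_class, target_class,
                             num_classes):
        # Cross-entropy against the smoothed targets; the resulting gradients
        # pull the source and target classes closer together in feature space.
        soft = smoothed_poison_targets(labels, source_class, target_class,
                                       num_classes)
        return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

Under this sketch, the malicious client simply substitutes malicious_local_loss for the standard cross-entropy loss during its local training rounds, so the poisoning reinforcement rides on otherwise normal-looking updates.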
