Abstract
Federated learning (FL) enables a large number of distributed, independent participants to collaborate on training without sharing data. A malicious adversary can poison local models through backdoor poisoning attacks and exploit the fact that the server cannot inspect the original data, allowing poisoned models to be aggregated directly. Such attacks are especially powerful in AIoT-FL networks, which generate large amounts of data in real time. In this paper, we design a sybil-based backdoor poisoning attack (SBPA) that exploits this vulnerability. Malicious participants inject backdoor triggers into their distributed data to complete data poisoning covertly. Through subsequent iterative aggregation, the backdoor is embedded in the joint model, which then misclassifies backdoor images at test time. In addition, malicious participants exploit the tendency of system devices to disconnect easily by creating sybil nodes that join the aggregation, increasing the probability that the poisoned local models are aggregated. The goal is a final global model that misclassifies backdoor images while maintaining high classification accuracy on non-backdoor samples. Extensive experiments on multiple datasets show that SBPA outperforms the state-of-the-art across various metrics under both i.i.d. and non-i.i.d. data distributions.
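The abstract does not specify implementation details of the trigger-injection step. A minimal sketch of the idea it describes is given below, assuming a BadNets-style pixel-pattern trigger stamped on a fraction of a malicious participant's local images, with those images relabeled to an attacker-chosen target class; the function name and all parameters (poison_batch, trigger_size, target_class, poison_frac) are hypothetical, not from the paper.

```python
import numpy as np

def poison_batch(images, labels, target_class=0, poison_frac=0.1,
                 trigger_value=1.0, trigger_size=3):
    """Hypothetical backdoor poisoning step, not the paper's exact method.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    A small bright square is stamped into the bottom-right corner of a
    random subset of images, and those images are relabeled to
    `target_class`, so the trained model associates the trigger with
    the target while behaving normally on clean inputs.
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Stamp the pixel-pattern trigger onto the selected images.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    # Flip the labels so the trigger maps to the attacker's target class.
    labels[idx] = target_class
    return images, labels
```

In the attack as summarized, each malicious participant would train its local model on such poisoned data, and the sybil nodes serve only to raise the chance that these poisoned updates are included in the server's aggregation round.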