Abstract

Data decentralization and privacy constraints in federated learning systems withhold user data from the server. Adversaries can exploit this privacy feature to corrupt the federated network with forged updates trained on malicious data. This paper proposes a defense mechanism based on adversarial training and label noise analysis to address this problem. To do so, we design a generative adversarial scheme for vaccinating local models by injecting them with artificially generated label noise that resembles backdoor and label flipping attacks. From the perspective of label noise analysis, all poisoned labels can be generated through three different mechanisms. We demonstrate how backdoor and label flipping attacks correspond to each of these noise mechanisms and account for all of them in the introduced design. In addition, we propose equipping the client models with noisy-label classifiers. The combination of these two mechanisms enables the model to learn the possible noise distributions, which eliminates the effect of corrupted updates generated by malicious activities. Moreover, this work conducts a comparative study of state-of-the-art deep noisy-label classifiers. The designed framework and selected methods are evaluated for intrusion detection on two Internet of Things networks. The results indicate the effectiveness of the proposed approach.
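To make the noise mechanisms mentioned in the abstract concrete, the sketch below shows two common ways of generating artificial label noise for such "vaccination" experiments: symmetric (class-independent) noise, and pair-flipping (class-dependent) noise resembling a label-flipping attack. This is an illustrative sketch only; the function names, the noise rates, and the fixed flip target are assumptions for the example and are not taken from the paper itself.

```python
import numpy as np

def symmetric_noise(labels, num_classes, rate, rng):
    """Flip each label, with probability `rate`, to a uniformly chosen
    different class (class-independent noise)."""
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < rate
    for i in np.where(flip)[0]:
        # Choose uniformly among the other classes.
        candidates = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(candidates)
    return noisy

def pair_flip_noise(labels, num_classes, rate, rng):
    """Flip each label, with probability `rate`, to one fixed target class
    (here: the next class index, modulo num_classes). This class-dependent
    pattern resembles a label-flipping attack, where a specific source
    class is remapped to a specific target class."""
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < rate
    noisy[flip] = (noisy[flip] + 1) % num_classes
    return noisy
```

In a vaccination-style setup, labels corrupted this way would be injected into local training data so that client models learn to tolerate the corresponding noise distribution before encountering genuinely poisoned updates.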
