Abstract

The distributed nature of federated learning (FL) renders the learning process susceptible to model poisoning attacks, whereby malicious local workers report fabricated local training results to the FL server with the intention of degrading the global model, or even derailing the learning process so that it no longer converges. Existing defense mechanisms typically treat falsified local updates as outliers lying far from the mean update, and attempt to counter such attacks by detecting and eliminating these outliers from the reported local updates. These methods perform poorly when the data across workers are non-I.I.D. and/or when multiple attackers collude, in which case an outlier update is not necessarily a falsified one, and vice versa. In this paper, we propose a novel defense mechanism, MinVar, that counters model poisoning attacks in FL from a drastically different perspective. Instead of detecting and eliminating outlier local updates from the global model aggregation, MinVar retains all local updates but assigns each a different weight in the aggregation. MinVar determines the optimal weights by formulating and solving an optimization problem in each iteration of the learning process, aiming to suppress the contribution of falsified (i.e., malicious) updates while retaining the contribution of honest updates. Exploiting the sparsity commonly observed in deep neural networks, a data sampling technique is further proposed to reduce the computational complexity of MinVar while preserving its defense performance. Extensive experiments are conducted on both the MNIST and CIFAR-10 datasets, and the results verify the effectiveness of the proposed MinVar defense mechanism.
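The abstract does not spell out MinVar's optimization objective, so the minimal Python sketch below only illustrates the weighted-aggregation idea it describes: every local update is kept, but each receives a per-iteration weight intended to suppress falsified updates. The weight-selection rule used here (down-weighting updates by their distance from the coordinate-wise median) is a hypothetical stand-in, not the paper's formulation; the function names and the NumPy setup are likewise illustrative.

```python
import numpy as np

def aggregate_weighted(local_updates, weights):
    """Weighted aggregation of local model updates.

    local_updates: list of 1-D numpy arrays (flattened updates), one per worker.
    weights: 1-D numpy array of non-negative weights summing to 1, one per worker.
    """
    updates = np.stack(local_updates)   # shape: (num_workers, num_params)
    return weights @ updates            # weighted sum of all updates


def solve_weights(local_updates):
    """Hypothetical per-iteration weight selection (NOT the paper's objective).

    Stand-in for MinVar's optimization step: updates far from the
    coordinate-wise median get smaller weights; weights are normalized to 1.
    """
    updates = np.stack(local_updates)
    center = np.median(updates, axis=0)
    distances = np.linalg.norm(updates - center, axis=1)
    scores = 1.0 / (1.0 + distances)    # closer to the median -> larger weight
    return scores / scores.sum()


# Toy usage: three honest workers and one crude poisoner.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(3)]
poisoned = [rng.normal(5.0, 0.1, size=10)]
updates = honest + poisoned

w = solve_weights(updates)
global_update = aggregate_weighted(updates, w)
print("weights:", np.round(w, 3))   # the poisoned worker receives a small weight
```

The key contrast with outlier-removal defenses is visible in the sketch: no update is discarded, so honest-but-atypical updates (e.g., from non-I.I.D. workers) still contribute, only with reduced influence.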
