Abstract

Federated learning is increasingly popular because it circumvents the challenges posed by data islands: a global model is trained using data from one or more data owners/sources without centralizing that data. However, in edge computing, resource-constrained end devices are vulnerable to being compromised and abused to facilitate poisoning attacks. Privacy preservation is another important property when handling sensitive user data on end devices. Most existing approaches either defend against poisoning attacks or support privacy, but not both simultaneously. In this paper, we propose a differentially private federated learning model that resists poisoning attacks and is designed for edge computing deployment. First, we design a weight-based algorithm that performs anomaly detection at edge nodes on the parameters uploaded by end devices; it improves the detection rate using only small validation datasets and minimizes communication cost. Then, differential privacy is leveraged to protect both the data and the model in the edge computing setting. We also evaluate detection performance in the presence of random and customized malicious end devices, comparing against the state of the art in terms of attack resiliency and communication and computation costs. Experimental results demonstrate that our scheme achieves a favorable tradeoff between security, efficiency, and accuracy.
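To make the two ingredients concrete, the following is a minimal sketch (not the paper's actual algorithm) of how an edge node might combine weight-based anomaly detection with differentially private aggregation: client updates whose direction disagrees with a reference update computed on a small validation set are filtered out, and the surviving updates are clipped and perturbed with Gaussian noise before aggregation. All function names, thresholds, and noise parameters here are illustrative assumptions.

```python
import numpy as np

def detect_anomalous_updates(updates, reference, threshold=0.0):
    """Flag client updates whose cosine similarity to a reference update
    (e.g., one computed on a small validation dataset) falls below
    `threshold`. Returns True for updates suspected to be poisoned.
    Hypothetical detector, not the paper's weight-based algorithm."""
    ref = reference / (np.linalg.norm(reference) + 1e-12)
    flags = []
    for u in updates:
        sim = float(np.dot(u, ref) / (np.linalg.norm(u) + 1e-12))
        flags.append(sim < threshold)
    return flags

def aggregate_with_dp(updates, flags, clip=1.0, sigma=0.5, rng=None):
    """Average the accepted updates, clipping each to L2 norm `clip`
    and adding Gaussian noise scaled to the clipping bound, in the
    style of Gaussian-mechanism differential privacy."""
    rng = rng or np.random.default_rng(0)
    kept = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
            for u, bad in zip(updates, flags) if not bad]
    mean = np.mean(kept, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(kept), size=mean.shape)
    return mean + noise

# Two honest updates and one sign-flipped (poisoned) update.
updates = [np.array([1.0, 0.1]),
           np.array([0.9, -0.1]),
           np.array([-1.0, 0.0])]
reference = np.array([1.0, 0.0])
flags = detect_anomalous_updates(updates, reference)
model_delta = aggregate_with_dp(updates, flags)
```

In this toy run the sign-flipped update is rejected before aggregation, so the poisoned direction never enters the global model, while the Gaussian noise bounds what the aggregated update reveals about any single accepted client.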
