Abstract

As a distributed learning framework, Federated Learning (FL) allows local learners/participants to collaboratively train a joint model without exposing their own local data, offering a feasible way to break down data islands while remaining legally compliant. However, FL faces two challenges: data privacy and model security. The former means that, given access to a trained FL model, various methods can be used to infer the original data samples, causing data leakage. The latter means that unreliable or malicious participants may degrade or destroy the joint FL model by uploading incorrect local model parameters. This paper therefore proposes a novel distributed FL training framework, LDP-Fed+, which accounts for both differential privacy protection and model security defense. First, a local perturbation module is added on the local learner side; it perturbs the original data of each local learner through feature extraction, binary encoding and decoding, and randomized response. The local neural network model is then trained on the perturbed data, yielding network parameters that satisfy local differential privacy and effectively counter model inversion attacks. Second, a security defense module is added on the server side; it uses an auxiliary model together with the exponential mechanism of differential privacy to select an appropriate number of perturbed local parameters for aggregation, strengthening the model's defense and countering membership inference attacks. Experimental results show that, compared with other federated learning models based on differential privacy, LDP-Fed+ offers stronger robustness for model security and higher model training accuracy while ensuring strict privacy protection.
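To make the local perturbation pipeline concrete, the following is a minimal sketch (not the authors' code) of binary encoding followed by randomized response, the combination the abstract credits with achieving local differential privacy. All names (`binarize`, `randomized_response`, `num_bits`, `epsilon`) are illustrative assumptions; the paper's feature extraction and decoding steps are omitted.

```python
# Minimal sketch of binary encoding + randomized response (illustrative only).
import numpy as np

def binarize(features: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Quantize features in [0, 1] to fixed-width binary codes, then flatten to bits."""
    levels = np.clip((features * (2 ** num_bits - 1)).astype(int), 0, 2 ** num_bits - 1)
    bits = ((levels[:, None] >> np.arange(num_bits)) & 1).astype(np.uint8)
    return bits.reshape(-1)

def randomized_response(bits: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Keep each bit with prob e^eps / (e^eps + 1), flip otherwise: eps-LDP per bit."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits).astype(np.uint8)

# Toy usage: perturb a feature vector before any local training touches it.
x = np.random.rand(16)  # assumed normalized to [0, 1]
noisy_bits = randomized_response(binarize(x), epsilon=1.0)
```

The per-bit flip probability is the standard randomized-response setting: the ratio between the probabilities of releasing a true bit and a flipped bit is exactly e^epsilon, which is what bounds an observer's inference about the original value.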
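Similarly, the server-side defense can be sketched as follows, assuming the auxiliary model assigns each uploaded update a utility score (e.g., its accuracy on a held-out validation set) and the exponential mechanism samples which perturbed parameters to aggregate. `exponential_mechanism`, `aggregate`, `sensitivity`, and `k` are hypothetical names, not the paper's API.

```python
# Minimal sketch of exponential-mechanism selection of client updates (illustrative only).
import numpy as np

def exponential_mechanism(scores: np.ndarray, epsilon: float,
                          sensitivity: float, k: int, rng=None) -> np.ndarray:
    """Sample k client indices with Pr proportional to exp(eps * score / (2 * sensitivity))."""
    rng = rng or np.random.default_rng()
    logits = epsilon * scores / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

def aggregate(updates, chosen):
    """Plain average over the selected perturbed local parameters."""
    return np.mean([updates[i] for i in chosen], axis=0)
```

Drawing k indices in a single pass without replacement is a simplification of k sequential exponential-mechanism draws, but the effect is the same in spirit: higher-utility updates are exponentially more likely to be chosen, which suppresses the incorrect parameters a malicious participant might upload.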
