Abstract

The shuffle model is a promising approach to federated learning: it combines the high accuracy of central federated learning with the strong privacy guarantees of local federated learning. Although the shuffle model can address both the privacy and accuracy concerns of federated learning, existing research on shuffle models suffers from two main problems. First, the layers of a machine learning model have different ranges of weights, and only some weights are important; perturbing all weights equally ignores the importance of individual weights and degrades model accuracy. Second, because the weights of all clients are perturbed, the dimensionality of model aggregation grows and the privacy budget surges. This paper proposes an adaptive top-k differentially private federated shuffle model to address these issues. The model dynamically adjusts k across iterations, allowing the client and the shuffler to control how many weight parameters are perturbed, which preserves the importance of weights and reduces the privacy budget for high-dimensional models. It is found that when k takes extreme values, the security of the model decreases. To overcome this challenge, the paper proposes a double perturbation mechanism that perturbs both the top-k and non-top-k weights with subsampling, which improves the security of the model and further reduces the privacy budget. Experiments are conducted on three datasets (MNIST, Fashion-MNIST, and CIFAR-10) under three data distributions: independent and identically distributed, non-independent but identically distributed, and non-independent and differently distributed. The experimental results show that the proposed model achieves good performance and efficiency for privacy protection.
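To make the mechanism described above concrete, the following is a minimal Python sketch of one possible client-side perturbation step: it selects the top-k weights by magnitude, perturbs them with Laplace noise, and perturbs only a subsampled fraction of the remaining (non-top-k) weights. The function names, parameters (`epsilon_topk`, `epsilon_rest`, `subsample_rate`), and the decreasing schedule for k are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): adaptive top-k selection with
# double perturbation of top-k and subsampled non-top-k weights.
import numpy as np


def adaptive_k(round_idx, dim, k_min=10):
    """Hypothetical schedule: shrink k as training rounds progress."""
    return max(k_min, dim // (round_idx + 1))


def perturb_update(weights, round_idx, sensitivity=1.0,
                   epsilon_topk=1.0, epsilon_rest=0.5,
                   subsample_rate=0.1, rng=None):
    rng = rng or np.random.default_rng()
    dim = weights.size
    k = adaptive_k(round_idx, dim)

    # Select the k weights with the largest magnitude (the "important" ones).
    topk_idx = np.argpartition(np.abs(weights), -k)[-k:]

    out = weights.copy()
    # Perturb every top-k weight with Laplace noise scaled to epsilon_topk.
    out[topk_idx] += rng.laplace(scale=sensitivity / epsilon_topk, size=k)

    # Subsample the non-top-k coordinates and perturb only that subset, so
    # the remaining weights are still randomized without spending the full
    # per-coordinate privacy budget.
    rest_idx = np.setdiff1d(np.arange(dim), topk_idx)
    n_sampled = max(1, int(subsample_rate * rest_idx.size))
    sampled = rng.choice(rest_idx, size=n_sampled, replace=False)
    out[sampled] += rng.laplace(scale=sensitivity / epsilon_rest,
                                size=sampled.size)
    return out


# Example: one client perturbing a 1000-dimensional update in round 5.
update = np.random.default_rng(0).normal(size=1000)
noisy = perturb_update(update, round_idx=5)
```

In this sketch the double perturbation keeps every coordinate plausibly noisy (so extreme k values do not expose the unperturbed weights outright) while the subsampling limits how much budget the non-top-k coordinates consume.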
