Abstract

Federated learning (FL) has brought significant advantages to applications in which collaborative learning across multiple participating devices improves the user experience for specific tasks. However, FL can still leak privacy: when n-1 clients collude, they can infer the model of the remaining client. In this paper, we not only implement an FL framework but also propose a methodology for preventing such privacy leakage while realizing machine learning-based automatic handwritten digit recognition. Our framework supports federated learning of deep networks in which locally trained models are averaged. Two machine learning models, a Convolutional Neural Network (CNN) and a Multilayer Perceptron (MLP), are implemented with FL. We propose an algorithm, Federated Averaging with Privacy Leakage Prevention (FA-PLP), for server-side model averaging. Our algorithm exploits differential privacy (DP) to perform model averaging while eliminating the risk of privacy leakage. We evaluated our framework on two distributions of the MNIST dataset. Our empirical results show that FA-PLP with the CNN model achieves the highest accuracy, 95.38%.
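
The abstract does not spell out FA-PLP in detail; the sketch below only illustrates the general idea of DP-style server-side averaging (per-client clipping plus Gaussian noise on the aggregate), under assumptions of our own. The names `fa_plp_average`, `clip_norm`, and `noise_std` are hypothetical and not taken from the paper.

```python
import numpy as np

def fa_plp_average(client_weights, clip_norm=1.0, noise_std=0.01, rng=None):
    """Hypothetical sketch of DP-noised server-side model averaging.

    client_weights: list of per-client weight vectors (np.ndarray).
    clip_norm, noise_std: assumed clipping bound and Gaussian noise scale;
    the actual FA-PLP parameters are not given in the abstract.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for w in client_weights:
        norm = np.linalg.norm(w)
        # Clip each client's update so a single client has bounded influence.
        clipped.append(w * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Add Gaussian noise to the average so that even if n-1 clients collude,
    # the remaining client's exact contribution cannot be recovered.
    return avg + rng.normal(0.0, noise_std, size=avg.shape)

# Example: three clients, each contributing a flattened weight vector.
clients = [np.random.randn(10) for _ in range(3)]
noisy_global = fa_plp_average(clients)
```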
