Abstract

Privacy protection has become an important concern with the great success of machine learning. In this paper, we propose a multi-party privacy-preserving machine learning framework, named PFMLP, based on partially homomorphic encryption and federated learning. The core idea is that all learning parties transmit only gradients encrypted with homomorphic encryption. Experiments show that the model trained by PFMLP has almost the same accuracy as a conventionally trained model, with a deviation of less than 1%. To reduce the computational overhead of homomorphic encryption, we use an improved Paillier algorithm that speeds up training by 25–28%. Moreover, comparisons of encryption key length, learning network structure, number of learning clients, etc. are discussed in detail in the paper.
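The additive homomorphism at the heart of this idea — a server can sum gradients it cannot read — can be illustrated with a minimal, insecure textbook-Paillier sketch in Python. The demo-sized primes and the fixed-point scale below are assumptions for illustration only; this is not the paper's improved Paillier variant.

```python
import math
import random

def keygen(p=1000003, q=1000033):
    # Demo-sized primes only; real deployments need >= 1024-bit primes.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)            # valid because we take g = n + 1
    return (n, n * n), (lam, mu)

def encrypt(pub, m):
    n, n2 = pub
    r = random.randrange(1, n)      # blinding factor, gcd(r, n) == 1 w.h.p.
    return (1 + m * n) % n2 * pow(r, n, n2) % n2   # g = n+1 gives g^m = 1 + m*n

def decrypt(pub, priv, c):
    n, n2 = pub
    lam, mu = priv
    return (pow(c, lam, n2) - 1) // n * mu % n     # L(c^lam mod n^2) * mu mod n

SCALE = 10**6                       # fixed-point encoding for float gradients

def enc_grad(pub, g):
    return encrypt(pub, int(round(g * SCALE)))

def dec_grad(pub, priv, c):
    n, _ = pub
    m = decrypt(pub, priv, c)
    return (m - n if m > n // 2 else m) / SCALE    # map back to signed value

pub, priv = keygen()
# Multiplying ciphertexts adds the underlying plaintexts, so an
# aggregator can combine two clients' gradients without seeing them.
c_sum = enc_grad(pub, 0.25) * enc_grad(pub, -0.10) % pub[1]
print(dec_grad(pub, priv, c_sum))   # -> 0.15
```

Only the party holding the private key (in PFMLP, the learning clients) can recover the aggregated gradient; the computing server operates purely on ciphertexts.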

Highlights

  • We developed a privacy protected machine learning algorithm, named Paillier Federated Multi-Layer Perceptron (PFMLP), based on homomorphic encryption

  • Related work on federated transfer learning, such as the framework designed in [36], can be flexibly applied to various secure multi-party machine learning settings

  • The experiments on the MNIST dataset show that the model trained by PFMLP reaches an accuracy of 0.9252 on the testing set, while the MLP trained centrally on all of the training data reaches an accuracy of 0.9245, just 0.0007 lower than that of the PFMLP algorithm


Summary

Introduction

In the big data era, data privacy has become one of the most significant issues. So far, plenty of security strategies and encryption algorithms exist that try to ensure sensitive data is not compromised. We developed a privacy-protected machine learning algorithm, named PFMLP, based on homomorphic encryption. The multi-party privacy-protected machine learning proposed in this paper has a wide range of practical application scenarios. The main contributions of this work are as follows: it provides a multi-party privacy-protected machine learning framework that combines homomorphic encryption and federated learning to protect data and model security during training; the proposed framework maintains private data security when multiple parties learn together; and it verifies that the model trained by our proposed algorithm has an accuracy similar to that of a model trained by traditional methods.
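The federated idea underlying these contributions — private data never leaves a client, only gradients are shared and averaged — can be sketched with a toy one-parameter linear model (encryption omitted; all names and data here are illustrative, not from the paper):

```python
def grad(w, data):
    # d/dw of mean squared error for y ~ w * x, on one client's private data
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

client_data = [
    [(1.0, 2.0), (2.0, 4.0)],      # client A's private samples (y = 2x)
    [(3.0, 6.0), (4.0, 8.0)],      # client B's private samples
]

w, lr = 0.0, 0.02
for _ in range(200):
    grads = [grad(w, d) for d in client_data]   # computed locally, per client
    w -= lr * sum(grads) / len(grads)           # server sees only gradients

print(round(w, 3))   # -> 2.0 (all clients converge to the same model)
```

In PFMLP the same loop runs with the gradients Paillier-encrypted in transit, so the computing server aggregates them without learning their values.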

Distributed Machine Learning
Secure Multi-Party Computation and Homomorphic Encryption
Federated Learning
Multi-Sample Cooperative Learning Based on a Federated Idea
Federated Network Algorithm
Learning Client
Computing Server
Federated Multi-Layer Perceptron Algorithm
Paillier Federated Network
Paillier Algorithm
Improved Paillier Algorithm
Architecture of the Paillier Federated Network
Algorithm Security Analysis
Experimental Datasets and Environment
Accuracy Comparison
Comparison of Model Training Time for Different Key Lengths
Comparison of Training Performance with Different Sizes of Hidden Layers
Different Numbers of Learning Clients on Training Accuracy and Time Overhead
Conclusions and Future Work
