Abstract

Machine learning has been applied in a wide range of fields. To train more effective control models, organizations holding private data increasingly collaborate with one another, which raises privacy concerns. Federated learning allows multiple participants to train a learning model collaboratively by sharing only gradients rather than their private training data. Nonetheless, recent research shows that sharing gradients can still leak the original data. To eliminate data leakage in federated learning, this paper proposes a secure multiparty federated learning control system comprising a secure training process and a secure prediction process. In the training process, data providers train the learning model collaboratively without disclosing their local data, and the trained model can be verified by the participants. The data providers can then offer users prediction services based on the trained model. In the prediction process, data providers cannot access the user data, users cannot obtain the model, and the prediction result can be verified by users. We carry out comprehensive experiments to demonstrate the effectiveness of the proposed scheme.
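To make the federated setting concrete, the following is a minimal sketch of gradient-only collaboration: each client computes a gradient on its private data, and a server averages those gradients to update a shared model. The 1-D linear model, the data, and the update rule are hypothetical placeholders for illustration; this is not the paper's secure scheme (which additionally protects the shared gradients and supports verification).

```python
# Minimal sketch of gradient-sharing federated learning (FedAvg-style).
# Hypothetical toy model and data; the paper's secure protocol additionally
# prevents leakage from the shared gradients themselves.

def local_gradient(w, data):
    """Each client computes the gradient of a squared-error loss
    for a 1-D linear model y = w * x on its private data."""
    n = len(data)
    return sum(2 * (w * x - y) * x for x, y in data) / n

def federated_round(w, client_datasets, lr=0.1):
    """The server averages client gradients; raw data never leaves a client."""
    grads = [local_gradient(w, d) for d in client_datasets]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Two clients whose private data follow y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 3))  # converges toward the true slope 2.0
```

The server only ever sees per-client gradients, never the raw data points; the leakage results cited in the abstract show that even these gradients can reveal the underlying data, which motivates the secure training process proposed here.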

