Abstract

Machine Learning as a Service (MLaaS) is now widely deployed; for example, logistic regression models trained in the cloud provide users with classification predictions. However, these services allow the cloud to collect data from a large number of users, which raises privacy concerns. By sharing local models rather than user data, Federated Learning (FL) alleviates these data privacy concerns. However, in the FL setting, the shared global model (called the maturing model) approximates the optimal model so closely that the difference between their predictions is not significant, which still poses a security risk. In this paper, we design a flexible and privacy-preserving FL system to tackle these issues. The system divides the training process into two stages according to whether the metrics of the iterative global gradient reach a preset threshold. The two stages are implemented with Re-Encryption and CKKS homomorphic encryption, which not only enables secure aggregation in the cloud but also maintains the security of the maturing model. Through detailed analysis, we prove the security of our protocol. Moreover, quantitative comparisons in the final experiments demonstrate that our scheme achieves a trade-off between accuracy, training time, and communication overhead.
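To make the CKKS-based secure aggregation concrete, the sketch below shows (in broad strokes) how encrypted client gradients could be averaged in the cloud and how a threshold on a gradient metric could trigger a stage switch. This is an illustrative sketch only, not the paper's implementation: the TenSEAL library, the parameter choices, the L2-norm metric, and the stage-switch rule are all assumptions introduced here for clarity.

```python
# Illustrative sketch only: CKKS-based secure aggregation of client gradients
# using the open-source TenSEAL library. This is NOT the paper's protocol; the
# stage-switching rule and all parameter choices here are assumptions.
import tenseal as ts

# Shared CKKS context (in a real deployment the secret key is never held by the cloud).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Each client encrypts its local gradient vector before uploading it.
client_gradients = [
    [0.10, -0.20, 0.05],
    [0.12, -0.18, 0.07],
    [0.08, -0.22, 0.04],
]
encrypted = [ts.ckks_vector(context, g) for g in client_gradients]

# The cloud aggregates ciphertexts homomorphically, never seeing any plaintext.
agg = encrypted[0]
for ct in encrypted[1:]:
    agg = agg + ct
agg = agg * (1.0 / len(encrypted))  # plain-scalar multiplication for averaging

# Decryption happens on the key holder's side, not in the cloud.
global_gradient = agg.decrypt()

# Hypothetical stage switch: once a gradient metric (here, the L2 norm) crosses
# a preset threshold, training moves to the second stage of the protocol.
THRESHOLD = 0.05
metric = sum(x * x for x in global_gradient) ** 0.5
stage = 2 if metric < THRESHOLD else 1
print(f"gradient metric = {metric:.4f}, continue in stage {stage}")
```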
