Abstract

Federated learning protects data privacy by enabling training over local data samples without exchanging them. It is, however, far from practical and secure, since data privacy remains vulnerable to well-studied attacks such as membership inference and model inversion attacks. In this paper, to further prevent data leakage under these attacks, we propose FL-PATE, a differentially private federated learning framework with knowledge transfer. Specifically, participants with sensitive data are grouped to train teacher models in a federated learning setting, and the knowledge of the teacher models is transferred to a publicly accessible student model by aggregating the teacher models' outputs on a public dataset; the student model is then used for prediction. A modified client-level differential privacy mechanism guarantees each participant's data privacy during the training of the corresponding teacher model. The proposed framework preserves each participant's privacy against membership inference attacks, and the differential privacy cost is fixed. Privacy analysis and experiments demonstrate, theoretically and empirically, that the trained teacher and student models achieve strong accuracy and robustness.
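
The abstract does not spell out the teacher-to-student transfer step. As a rough illustration only, the sketch below shows a minimal PATE-style noisy aggregation in Python, assuming each teacher votes with its top predicted class and Laplace noise is added to the vote histogram before the winning label is released to the student; the function name `noisy_teacher_aggregate`, the `noise_scale` parameter, and the Laplace mechanism are illustrative assumptions, not FL-PATE's actual specification.

```python
import numpy as np

def noisy_teacher_aggregate(teacher_logits, num_classes, noise_scale, rng=None):
    """PATE-style noisy aggregation of teacher predictions (illustrative sketch).

    teacher_logits: array of shape (num_teachers, num_classes) holding each
        teacher model's output for one public sample.
    noise_scale: scale of the Laplace noise added to the vote histogram;
        larger values give stronger differential privacy at some accuracy cost.
    Returns the noisy majority label used to train the student model.
    """
    rng = rng or np.random.default_rng()
    # Each teacher casts one vote for its top class.
    votes = np.bincount(np.argmax(teacher_logits, axis=1), minlength=num_classes)
    # Laplace noise on the vote counts makes the released label differentially private.
    noisy_votes = votes + rng.laplace(scale=noise_scale, size=num_classes)
    # The student is trained on (public sample, noisy label) pairs.
    return int(np.argmax(noisy_votes))
```

In PATE-style frameworks the student only ever sees these noisy labels on public data, which is what keeps the privacy cost fixed regardless of how the student is later queried.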
