Abstract

Federated Learning (FL) has been widely used in fields such as financial risk control, e-government, and smart healthcare. To protect data privacy, many privacy-preserving FL approaches have been designed and deployed in various scenarios. However, existing works impose a heavy communication burden on clients and degrade model accuracy when the data samples owned by clients are non-Independently and Identically Distributed (non-IID). To address these issues, in this paper we propose a secure Model-Contrastive Federated Learning with improved Compressive Sensing (MCFL-CS) scheme, motivated by contrastive learning. We combine a model-contrastive loss with a cross-entropy loss in the local network architecture of our scheme, which alleviates the impact of data heterogeneity on model accuracy. We then use improved compressive sensing and local differential privacy to reduce communication costs and prevent leakage of clients' privacy. A formal security analysis shows that our scheme satisfies (ε,δ)-differential privacy, and extensive experiments on five benchmark datasets demonstrate that, compared with FedAvg, our scheme improves model accuracy by 3.45% on average across all datasets under the non-IID setting and reduces communication costs by more than 95%.
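The abstract does not give the exact form of the combined local objective. A minimal NumPy sketch, assuming a MOON-style combination in which the model-contrastive term pulls the local representation toward the global model's representation and away from the previous local model's (the function names, temperature `tau`, and weight `mu` are illustrative, not taken from the paper):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two representation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    # Hypothetical MOON-style contrastive term: treat the global model's
    # representation as the positive and the previous local model's as the
    # negative, with temperature tau.
    pos = np.exp(cosine(z_local, z_global) / tau)
    neg = np.exp(cosine(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

def cross_entropy(probs, label):
    # Standard supervised cross-entropy for one sample.
    return -np.log(probs[label])

def combined_loss(probs, label, z_local, z_global, z_prev, mu=1.0, tau=0.5):
    # Total local objective: cross-entropy plus the model-contrastive term,
    # weighted by an assumed hyperparameter mu.
    return cross_entropy(probs, label) + \
        mu * model_contrastive_loss(z_local, z_global, z_prev, tau)
```

Under this sketch, a local representation aligned with the global model incurs a lower loss than one aligned with the stale local model, which is what discourages client drift under non-IID data.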
