Abstract

Federated learning (FL), a privacy-preserving machine learning (ML) paradigm, enables efficient distributed training. Nevertheless, existing FL frameworks remain vulnerable to privacy leakage, such as membership inference attacks. Popular defenses are mainly based on differential privacy (DP); however, DP achieves privacy preservation at an inevitable cost in model utility, which limits its practical deployment. To address this problem, we modify the FL framework with the information bottleneck (IB) method to attain a better trade-off between privacy protection and model utility. First, we adapt the client-side training process by applying IB to local training, so that privacy-sensitive information is squeezed out through the bottleneck. Second, we further modify the server-side training process: a validation step evaluates whether IB-based local training is indeed removing private information, and clients whose updates pass this check receive greater weight in the aggregation phase. Extensive experiments on classic datasets demonstrate the superiority of the proposed scheme in terms of both privacy preservation and model utility.
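The sketch below illustrates the two ideas the abstract describes, using the standard variational IB objective (Alemi et al.) as one concrete realization of IB-regularized local training, plus a validation-weighted FedAvg-style aggregation. It is a minimal, hypothetical sketch, not the paper's implementation: all names (`VIBClassifier`, `vib_loss`, `aggregate`, `beta`) and the exact weighting rule are illustrative assumptions.

```python
# Hypothetical sketch of (1) an IB-regularized local objective and
# (2) server-side aggregation weighted by per-client validation scores.
# The variational IB loss here is the standard formulation, which may
# differ from the paper's exact objective.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VIBClassifier(nn.Module):
    """Encoder produces a stochastic bottleneck T; a head predicts Y from T."""

    def __init__(self, in_dim=784, bottleneck=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, bottleneck)      # mean of q(t|x)
        self.logvar = nn.Linear(256, bottleneck)  # log-variance of q(t|x)
        self.head = nn.Linear(bottleneck, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample t ~ q(t|x)
        t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.head(t), mu, logvar


def vib_loss(logits, y, mu, logvar, beta=1e-3):
    """Cross-entropy (utility) + beta * KL(q(t|x) || N(0, I)) (compression).
    The KL term upper-bounds I(X;T), i.e. the information 'squeezed out'
    at the bottleneck; beta controls the privacy/utility trade-off."""
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )
    return ce + beta * kl


def aggregate(server_model, client_states, val_scores):
    """FedAvg variant: clients whose updates score well on a server-side
    validation set receive larger aggregation weights (one plausible
    reading of the abstract's weighting, not the paper's exact rule)."""
    weights = torch.tensor(val_scores, dtype=torch.float)
    weights = weights / weights.sum()
    new_state = {}
    for key in client_states[0]:
        new_state[key] = sum(
            w * s[key].float() for w, s in zip(weights, client_states)
        )
    server_model.load_state_dict(new_state)
```

In this reading, a client that over-fits private records tends to keep a large KL term or score poorly on the server's held-out validation data, so its update is down-weighted at aggregation time.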
