Abstract

Federated learning is a distributed learning framework that trains global models by exchanging model parameters instead of raw data. However, this parameter-exchange mechanism is still threatened by gradient inversion, inference attacks, and similar threats. With its lightweight encryption overhead, functional encryption is a viable secure aggregation technique for federated learning and is often used in combination with differential privacy. However, functional encryption in federated learning still has the following problems: (a) traditional functional encryption usually requires a trusted third party (TTP) to assign the keys; if the TTP colludes with the server, the secure aggregation mechanism can be compromised; (b) when differential privacy is combined with functional encryption, the evaluation metrics of incentive mechanisms in traditional federated learning become invisible. In this paper, we propose a hybrid privacy-preserving scheme for federated learning, called Fed-DFE. Specifically, we present a decentralized multi-client functional encryption algorithm that replaces the TTP of traditional functional encryption with an interactive key generation algorithm, avoiding the collusion problem. We then design an embedded incentive mechanism for functional encryption: it models the real parameters of federated learning and finds a balance between privacy preservation and model accuracy. Finally, we implemented a prototype of Fed-DFE and evaluated the performance of the decentralized functional encryption algorithm. The experimental results demonstrate the effectiveness and efficiency of our scheme.
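
To make the aggregate-only goal concrete, the toy pairwise-masking sketch below is a minimal illustration, not the DMCFE construction evaluated in the paper; all names (`pairwise_masks`, `masked_update`, the modulus `P`) are hypothetical. It shows a server that recovers only the sum of client updates from masked uploads, which is the property a decentralized secure aggregation scheme aims to provide without a TTP. In the paper, roughly the analogous cancellation is provided by interactively generated keys rather than explicit pairwise masks.

```python
# Toy pairwise-masking sketch (illustration only, NOT the paper's DMCFE scheme).
# Each pair of clients agrees on a shared secret; the secrets cancel in the sum,
# so the server recovers only the aggregate of the uploads.
import secrets

P = 2**61 - 1  # public modulus for the toy scheme

def pairwise_masks(n: int) -> list[list[int]]:
    """Shared secrets s[i][j] = -s[j][i] (mod P), agreed pairwise by clients."""
    s = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(P)
            s[i][j] = r
            s[j][i] = (-r) % P
    return s

def masked_update(i: int, update: int, s: list[list[int]]) -> int:
    """What client i uploads: its local update blinded by all of its pairwise masks."""
    return (update + sum(s[i])) % P

if __name__ == "__main__":
    updates = [5, 7, 11]                   # toy scalar "gradients"
    s = pairwise_masks(len(updates))
    uploads = [masked_update(i, u, s) for i, u in enumerate(updates)]
    # The masks cancel in the sum, so the server learns only the aggregate (23),
    # never an individual client's update.
    print(sum(uploads) % P)
```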

Highlights

  • As a result of the rapid development of deep neural networks, data-driven artificial intelligence has been widely used in smart transportation [1,2], Internet of Things [3,4], smart grid [5,6] and financial applications [7,8]

  • We used experiments to evaluate the effectiveness of our chosen parameters in real-world federated learning. We hope that this work can serve as a reference for applying different game-theoretic approaches in federated learning, so as to find a balance between privacy protection and model accuracy

  • We separately describe the details of decentralized multi-client functional encryption (DMCFE), local differential privacy, and the incentive mechanism involved in Fed-DFE


Summary

Introduction

As a result of the rapid development of deep neural networks, data-driven artificial intelligence has been widely used in smart transportation [1,2], the Internet of Things [3,4], smart grids [5,6], and financial applications [7,8]. Zhang et al. [11] proposed a distributed selective stochastic gradient descent algorithm combined with Paillier homomorphic encryption. In this scheme, a trusted third party (TTP) assigns keys to the users and the server, and the server uses Paillier additive homomorphism to achieve secure gradient aggregation. Differential privacy is a privacy-preserving technique backed by rigorous mathematical proofs: it prevents malicious attackers from inferring user privacy from the training data by adding carefully designed noise to the model. We propose a hybrid privacy-preserving framework for federated learning called Fed-DFE, which combines functional encryption with differential privacy to prevent gradient leakage. We also present an embedded incentive mechanism for functional encryption: it models and evaluates the real parameters in federated learning that affect the quality of model services, and uses the evaluation results as criteria for selecting participants.
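
As a point of reference for the Paillier-based aggregation attributed to Zhang et al. [11] above, the sketch below shows additively homomorphic summation of toy gradients, assuming the open-source `phe` (python-paillier) package is available. A single keypair stands in for the TTP-assigned keys, so this illustrates only the additive property, not the cited protocol.

```python
# Minimal sketch of additively homomorphic gradient aggregation with Paillier,
# assuming the third-party `phe` library (python-paillier) is installed.
# A single keypair stands in for the TTP-assigned keys of the cited scheme.
from phe import paillier

# In the cited setting, keys come from the TTP: clients hold the public key,
# the decrypting party holds the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

client_gradients = [0.12, -0.05, 0.31]                    # toy scalar gradients
ciphertexts = [public_key.encrypt(g) for g in client_gradients]

# The server adds ciphertexts without seeing any individual gradient.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

print(private_key.decrypt(encrypted_sum))                  # ~0.38, the aggregate
```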

Related Work
Cryptography-Based Secure Aggregation Algorithm in Federated Learning
Incentive Mechanism in Federated Learning
Decentralized Multi-Client Functional Encryption
Differential Privacy
Motivation and Our Basic Ideas
System Architecture
DMCFE for Fed-DFE
Local Differential Privacy for Fed-DFE
Incentive Mechanism for Fed-DFE
Security Analysis of Fed-DFE
Privacy Analysis of Fed-DFE
System Setup
Performance of the DMCFE
The Impacts of Privacy Budget
Findings
Conclusion
