Abstract

Federated Learning (FL) is a popular distributed machine learning paradigm that allows multiple users to collaboratively train a global model by exchanging local models, without the training data leaving each user's domain. However, FL still suffers from privacy risks, such as the leakage of private information from users' uploaded local models. To address this concern, several approaches have been proposed to achieve privacy-preserving FL (PPFL) based on differential privacy (DP), multi-party computation (MPC), homomorphic encryption (HE), and functional encryption (FE). Compared with DP, MPC, and HE, FE-based approaches are more advantageous and are therefore the focus of this work. Moreover, all existing FE-based PPFL schemes employ a multi-user extension of FE for a specific function, namely multi-input FE (MIFE). In this paper, we point out that existing FE-based PPFL schemes face several security issues due to the misuse of MIFE. After reconsidering the security requirements of PPFL, we propose new goals for designing PPFL with FE. To achieve these goals, we propose a new FE primitive called dual-mode decentralized multi-client FE (2DMCFE) and give a concrete construction of it. Building on 2DMCFE, we propose a new PPFL framework in which a fresh 2DMCFE instance is established for each subset of users. A security proof shows the strong security of our framework in the semi-honest setting. Furthermore, experiments conducted on a real dataset demonstrate that our framework achieves model accuracy and training efficiency comparable to the basic FE-based scheme while providing stronger security guarantees.
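
To make the FE building block behind such PPFL schemes concrete, the toy Python sketch below implements a single-input inner-product FE in the spirit of Abdalla et al.: a functional key for the all-ones weight vector lets the aggregator recover only the sum of the encrypted local updates, never the individual values. This is an illustrative assumption on our part, not the paper's 2DMCFE (which is a multi-client, dual-mode extension described in the full text); the group parameters, function names, and client values are hypothetical, and the parameters are far too small to be secure.

```python
# Toy single-input inner-product FE (ABDP15-style), for illustration only.
# Decrypting with a key for y = (1, ..., 1) reveals only the sum of inputs.
import random

# Tiny safe-prime group (p = 2q + 1); g = 4 generates the order-q subgroup.
P, Q, G = 2039, 1019, 4          # toy parameters -- far too small for real use


def setup(n):
    """Master secret key s and public key h_i = g^{s_i}."""
    msk = [random.randrange(Q) for _ in range(n)]
    mpk = [pow(G, s_i, P) for s_i in msk]
    return mpk, msk


def encrypt(mpk, x):
    """Encrypt a vector x of small non-negative integers component-wise."""
    r = random.randrange(Q)
    ct0 = pow(G, r, P)
    cts = [(pow(h_i, r, P) * pow(G, x_i, P)) % P for h_i, x_i in zip(mpk, x)]
    return ct0, cts


def keygen(msk, y):
    """Functional key for the weight vector y: sk_y = <s, y> mod q."""
    return sum(s_i * y_i for s_i, y_i in zip(msk, y)) % Q


def decrypt(ct, sk_y, y, max_result=1000):
    """Recover only <x, y>, by brute-forcing the discrete log of g^{<x, y>}."""
    ct0, cts = ct
    num = 1
    for c_i, y_i in zip(cts, y):
        num = (num * pow(c_i, y_i, P)) % P
    target = (num * pow(pow(ct0, sk_y, P), P - 2, P)) % P   # equals g^{<x, y>}
    acc = 1
    for v in range(max_result + 1):
        if acc == target:
            return v
        acc = (acc * G) % P
    raise ValueError("result out of range")


if __name__ == "__main__":
    # Three clients, each holding one (quantized) local model coordinate.
    local_updates = [7, 11, 5]
    mpk, msk = setup(len(local_updates))
    ct = encrypt(mpk, local_updates)
    sk_sum = keygen(msk, [1, 1, 1])          # key for the aggregation function
    print(decrypt(ct, sk_sum, [1, 1, 1]))    # prints 23: only the sum leaks
```

In a multi-client setting such as the paper's 2DMCFE, each user would encrypt its own update under an independent key, so no single party holds the full master secret; the single-key version above is only meant to show why a functional key reveals the aggregate and nothing more.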
