Abstract

Although federated learning does not require participants to share their private data with the central server, local data can still be revealed through the model parameters or gradients each client uploads if the server is honest-but-curious. To address this issue, homomorphic encryption is one of the mainstream privacy-preservation technologies used in federated learning, owing to its strong model-protection capability. However, most encryption-based federated learning systems are computationally intensive and consume a large amount of communication resources. Moreover, a trusted third party is usually needed to generate the key pairs for both encryption and decryption, which not only increases the complexity of the system topology but also introduces additional security threats. Therefore, in this chapter, we introduce two secure federated learning frameworks. The first targets horizontal federated learning: the global key pairs are generated jointly by the server and the connected clients without the help of a trusted third party, and model quantization and approximated model aggregation techniques are adopted to significantly improve encryption efficiency during training. The second framework is tailored to vertical federated learning, where labels are distributed across different local devices; to achieve secure node splitting and construction as well as label aggregation for an XGBoost tree, both homomorphic encryption and differential privacy are adopted.
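The key property that makes homomorphic encryption attractive for federated aggregation is that the server can combine encrypted client updates without decrypting them. The chapter's actual protocols (joint key-pair generation, quantization, approximated aggregation) are more involved; the following is only a minimal sketch of additively homomorphic aggregation using a toy Paillier scheme, with small hypothetical demo primes and integer "updates" chosen purely for illustration, not secure parameters.

```python
import math
import random

# Toy Paillier keypair. These demo primes are far too small for real
# security; they only make the arithmetic runnable.
p, q = 5915587277, 3267000013
n = p * q
n2 = n * n
g = n + 1                         # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)      # lambda = lcm(p-1, q-1)
mu = pow(lam, -1, n)              # with g = n+1, mu = lambda^{-1} mod n

def encrypt(m: int) -> int:
    """c = g^m * r^n mod n^2, with r random and coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Honest-but-curious server aggregates ciphertexts only: multiplying
# Paillier ciphertexts adds the underlying plaintexts.
client_updates = [42, 17, 99]     # hypothetical quantized model updates
ciphertexts = [encrypt(u) for u in client_updates]
agg = 1
for c in ciphertexts:
    agg = (agg * c) % n2
print(decrypt(agg))               # prints 158 = 42 + 17 + 99
```

The server never sees an individual plaintext update, yet the decrypted aggregate equals the plain sum; in a horizontal setting only the holders of the (jointly generated) private key can perform the final decryption.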
