Abstract

Federated learning is one of the cutting-edge research areas in machine learning in the era of big data. In federated learning, multiple clients rely on an untrusted server for model training in a distributed environment. Instead of sending local data directly to the server, each client achieves the effect of traditional centralized learning by sharing optimized parameters that represent its local model. However, the data a client uses to train its model contains individuals' private information. A potential adversary who corrupts the server can steal the clients' model parameters and then recover the clients' local training data or reconstruct their local models. To solve these problems, we construct an efficient federated learning framework based on multi-key homomorphic encryption, which effectively restricts the adversary from accessing the clients' models. In this framework, homomorphic encryption ensures that all operations, including the server-side aggregation process, are secure and reveal no private information about the training data. At the same time, we consider a multi-key scenario in which the clients do not all share a single public/private key pair; instead, each client holds its own. This allows a client to join a model update or go offline at any time, which greatly increases the flexibility and scalability of the system. Security and efficiency analyses indicate that the proposed framework is both secure and efficient.
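To make the server-side aggregation concrete, the sketch below shows the underlying additively homomorphic primitive in the single-key setting, using a toy Paillier-style scheme: clients encrypt their (integer-encoded) model updates, the server multiplies ciphertexts to sum the plaintexts without ever decrypting, and only the key holder can recover the aggregate. This is an illustrative assumption of ours, not the paper's construction: the paper's multi-key scheme generalizes this so that each client uses its own key pair, real model parameters would be fixed-point encoded, and the tiny primes here are insecure. All function names (`keygen`, `encrypt`, `add_encrypted`, `decrypt`) are ours.

```python
# Toy Paillier-style additively homomorphic encryption, illustration only.
# INSECURE: tiny primes, no padding; single-key, unlike the paper's
# multi-key scheme. All names here are illustrative, not from the paper.
import random
from math import gcd

def keygen(p=104_729, q=104_723):
    """Generate a key pair from two (small, demo-only) primes."""
    n = p * q
    lam = (p - 1) * (q - 1)
    mu = pow(lam, -1, n)          # valid because we use g = n + 1
    return n, (lam, mu)           # public key n, secret key (lam, mu)

def encrypt(n, m):
    """Encrypt integer m under public key n."""
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def add_encrypted(n, c1, c2):
    """Homomorphic addition: multiplying ciphertexts sums plaintexts."""
    return (c1 * c2) % (n * n)

def decrypt(n, sk, c):
    """Decrypt ciphertext c with secret key sk = (lam, mu)."""
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

# Three clients' integer-encoded updates are summed under encryption;
# the aggregator only ever handles ciphertexts.
updates = [7, 15, 20]
n, sk = keygen()
enc = [encrypt(n, u) for u in updates]
agg = enc[0]
for c in enc[1:]:
    agg = add_encrypted(n, agg, c)
assert decrypt(n, sk, agg) == sum(updates)
```

In the paper's multi-key setting, the same aggregation idea applies, but ciphertexts under different clients' keys can be combined and the result decrypted cooperatively, so no single shared secret key is needed.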
