Abstract

To address privacy concerns, federated learning (FL) has emerged as a promising machine learning technique that enables multiple decentralized clients to collaboratively train a shared model while keeping their private training data local. Although FL reduces the risk of data leakage, it is still possible for attackers to reverse-engineer a trained model and recover information about the original training dataset provided by an FL client. To avoid such risks, secure aggregation (SA) can be used to privately combine the clients' trained models when updating the shared model. However, SA usually introduces performance overhead, as it requires additional computation for encryption operations and, when secure multi-party computation (SMPC) is used, additional communication. In this paper, we analyze the performance of FL with SA using PySyft, an open-source framework that includes an FL implementation, and propose an asynchronous FL mechanism to improve overall performance. The results show that performance depends on the computational capabilities of the clients and the characteristics of the communication network, and we propose a performance modeling method that helps system designers break down the execution time and choose suitable trade-offs between privacy, efficiency, and accuracy for a balanced system.
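As a rough, hypothetical illustration of the idea behind secure aggregation (not taken from the paper and not PySyft's actual SMPC API), the Python sketch below sums per-client model updates using pairwise additive masks that cancel at the server, so no individual client update is ever revealed in the clear:

    # Minimal sketch of secure aggregation via pairwise additive masking.
    # Illustrative only; function and variable names are hypothetical.
    import random

    def secure_aggregate(updates, seed=0):
        """Sum per-client model updates without exposing any single update.

        updates: list of equal-length float lists, one per client.
        Each pair of clients shares a random mask vector; the lower-indexed
        client adds it and the higher-indexed client subtracts it, so all
        masks cancel exactly in the server-side sum.
        """
        n, dim = len(updates), len(updates[0])
        rng = random.Random(seed)
        # Pairwise masks shared between clients i < j.
        masks = {(i, j): [rng.uniform(-1, 1) for _ in range(dim)]
                 for i in range(n) for j in range(i + 1, n)}

        masked = []
        for i, upd in enumerate(updates):
            vec = list(upd)
            for j in range(n):
                if i == j:
                    continue
                r = masks[(min(i, j), max(i, j))]
                sign = 1.0 if i < j else -1.0
                vec = [v + sign * m for v, m in zip(vec, r)]
            masked.append(vec)  # only masked vectors are sent to the server

        # Server sums the masked updates; the pairwise masks cancel.
        return [sum(col) for col in zip(*masked)]

    if __name__ == "__main__":
        client_updates = [[0.1, 0.2], [0.3, -0.1], [0.05, 0.0]]
        print(secure_aggregate(client_updates))  # approx. [0.45, 0.1]

The extra mask generation and the exchange of mask material between clients are examples of the computation and communication overhead that SA adds on top of plain federated averaging.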
