Abstract

In recent years, data are typically distributed across multiple organizations, while data security is becoming increasingly important. Federated learning (FL), which enables multiple parties to collaboratively train a model without exchanging their raw data, has attracted increasing attention. Based on how the data are distributed, FL can be realized in three scenarios: horizontal, vertical, and hybrid. In this article, we combine distributed machine learning techniques with vertical FL and propose a distributed vertical federated learning (DVFL) approach. The DVFL approach exploits a fully distributed architecture within each party in order to accelerate the training process. In addition, we exploit homomorphic encryption to protect the data against honest-but-curious participants. We conduct extensive experiments in a large-scale cluster environment and a cloud environment to show the efficiency and scalability of our proposed approach. The experimental results demonstrate the good scalability of our approach and its significant efficiency advantage (up to 6.8 times faster with a single server and 15.1 times faster with multiple servers in terms of training time) compared with baseline frameworks.
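
The abstract does not specify which homomorphic encryption scheme DVFL uses, so the sketch below is only illustrative: it assumes an additively homomorphic Paillier cryptosystem (via the python-paillier `phe` package) and hypothetical per-party partial gradients, showing how an honest-but-curious aggregator can sum encrypted contributions without ever seeing them in the clear.

```python
# A minimal sketch of additively homomorphic aggregation, a common building
# block of vertical FL protocols. The scheme choice (Paillier via the `phe`
# package) and all names here are assumptions, not the paper's implementation.
from phe import paillier

# One designated party generates the key pair and shares only the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical partial gradient contributions, computed locally by three
# parties that each hold a disjoint (vertical) slice of the features.
partial_gradients = [0.125, -0.5, 0.75]

# Each party encrypts its contribution before sending it to the aggregator.
encrypted = [public_key.encrypt(g) for g in partial_gradients]

# Paillier is additively homomorphic, so E(a) + E(b) = E(a + b): the
# aggregator can sum ciphertexts without being able to decrypt them.
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the key holder recovers the aggregated value.
aggregated = private_key.decrypt(encrypted_sum)
print(aggregated)  # 0.375
```

Paillier supports addition of ciphertexts and multiplication of a ciphertext by a plaintext scalar, which covers the linear aggregations that dominate gradient-based training.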
