Abstract

Most existing federated learning (FL) systems focus on a data-parallel architecture in which the training data are partitioned by samples among several parties. In many real-life applications, however, partitioning by features is also of practical relevance, and the number of features is usually unbalanced among parties. The corresponding learning framework is referred to as Vertical Federated Learning (VFL). Although some pioneering work has focused on VFL, the convergence properties of VFL with unbalanced features remain unknown, especially when parties conduct different numbers of local updates owing to heterogeneous computational capabilities. In this article, we propose a new learning framework that improves the training efficiency of VFL with unbalanced features. Given the number of features and the computational capability of each party, our theoretical analysis shows that the number of local updates conducted by each party strongly affects both the convergence rate and the computational complexity, which jointly determine the overall training efficiency in an interrelated and intricate way. Based on these theoretical findings, we formulate an optimization problem and derive its optimal solution, which adaptively selects the number of local training rounds for each party. Extensive experiments on various datasets and models demonstrate that our approach significantly improves the training efficiency of VFL.
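To make the setting concrete, the following is a minimal sketch of VFL with per-party local-update budgets on a toy linear model. It assumes a common block-coordinate style of VFL local training, in which each party refines its own parameter block against stale partial predictions from the other parties; the abstract does not specify the paper's exact protocol, and all names and the fixed budgets in local_rounds are illustrative placeholders rather than the paper's derived optimal choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vertical partition: three parties hold unbalanced feature blocks.
n, feature_splits = 200, [6, 3, 1]
X_parts = [rng.normal(size=(n, d)) for d in feature_splits]
w_true = [rng.normal(size=d) for d in feature_splits]
y = sum(X @ w for X, w in zip(X_parts, w_true)) + 0.1 * rng.normal(size=n)

w_parts = [np.zeros(d) for d in feature_splits]  # each party's own block

# Hypothetical per-party local-update budgets (the paper derives these
# adaptively from its convergence analysis; here they are placeholders).
local_rounds = [4, 2, 1]
lr = 0.1

for _ in range(50):  # communication rounds
    # Each party shares its partial prediction X_k @ w_k once per round.
    partials = [X @ w for X, w in zip(X_parts, w_parts)]
    total = sum(partials)

    for k, (X, w) in enumerate(zip(X_parts, w_parts)):
        stale_others = total - partials[k]  # held fixed during local steps
        for _ in range(local_rounds[k]):
            residual = stale_others + X @ w - y
            w -= lr * X.T @ residual / n  # gradient step on block k only

pred = sum(X @ w for X, w in zip(X_parts, w_parts))
print("final MSE:", float(np.mean((pred - y) ** 2)))
```

The tunable quantity here is local_rounds: giving each party more local steps reduces communication rounds but raises its per-round computation, which is the trade-off the paper's optimization problem resolves by accounting for each party's feature count and computational capability.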
