Abstract
Most existing federated learning (FL) systems adopt a data-parallel architecture in which the training data are partitioned by samples among several parties. In some real-life applications, however, partitioning by features is also of practical relevance, and the number of features is usually unbalanced among parties. The corresponding learning framework is referred to as Vertical Federated Learning (VFL). Although some pioneering work has focused on VFL, the convergence properties of VFL with unbalanced features remain unknown, especially when parties conduct different numbers of local updates to match their heterogeneous computational capabilities. In this article, we propose a new learning framework to improve the training efficiency of VFL with unbalanced features. Given the number of features and the computational capability of each party, our theoretical analysis shows that the number of local updates conducted by each party strongly affects both the convergence rate and the computational complexity, which jointly determine the overall training efficiency in an intricate, interdependent way. Based on these theoretical findings, we formulate an optimization problem and derive its optimal solution, which selects an adaptive number of local training rounds for each party. Extensive experiments on various datasets and models demonstrate that our approach significantly improves the training efficiency of VFL.
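To make the setting concrete, below is a minimal NumPy sketch of feature-partitioned training on a linear model, where each party runs a different number of local updates per global round in a block-coordinate-descent fashion. The feature split sizes, the per-party update counts `Q`, and all variable names are illustrative assumptions for exposition, not the paper's actual algorithm or its derived optimal schedule.

```python
# Sketch: vertical FL with an unbalanced feature partition and a
# different (assumed) number of local updates per party per round.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Unbalanced feature partition among three parties (assumed split).
splits = [12, 5, 3]
cols = np.split(np.arange(d), np.cumsum(splits)[:-1])

# Per-party local update counts (assumed; the paper derives these by
# solving an optimization problem over features and compute capability).
Q = [4, 2, 1]
lr = 0.01

w = [np.zeros(len(c)) for c in cols]                  # each party's weight block
partial = [X[:, c] @ w_p for c, w_p in zip(cols, w)]  # exchanged partial sums

for rnd in range(100):
    for p, c in enumerate(cols):
        # Other parties' contributions stay frozen during local steps.
        others = sum(partial) - partial[p]
        Xp = X[:, c]
        for _ in range(Q[p]):                         # Q[p] local updates
            residual = others + Xp @ w[p] - y
            w[p] -= lr * Xp.T @ residual / n          # block gradient step
        partial[p] = Xp @ w[p]                        # communicate fresh partials

pred = sum(partial)
print(f"final MSE: {np.mean((pred - y) ** 2):.4f}")
```

In this sketch, giving a party more local updates (`Q[p]`) reduces communication rounds at the cost of extra local computation on stale partial sums from the other parties, which is the trade-off the abstract describes between convergence rate and computational complexity.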