Abstract

Vertical federated learning (VFL) enables privacy-preserving collaboration among small businesses that hold distinct but complementary feature sets. However, as VFL deployments expand, participants frequently join and leave, and departing participants may exercise their "right to be forgotten," which poses significant challenges in practice. How to efficiently erase a participant's contribution from the shared model remains largely unexplored in the vertical setting. In this paper, we introduce a vertical federated unlearning framework that integrates model checkpointing with a hybrid first-order optimization technique. The core idea is to reduce backpropagation time while improving convergence and generalization by combining the strengths of existing optimizers. We provide an in-depth theoretical analysis, including a time-complexity study, to illustrate the effectiveness of the proposed design. Extensive experiments on six public datasets demonstrate that our method achieves up to a 6.3× speed-up over the baseline, with negligible influence on the original learning task.
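
To make the mechanism concrete, the following is a minimal, illustrative sketch of the checkpoint-plus-hybrid-optimizer idea described above, not the authors' implementation. It assumes a PyTorch-style setup; the two-party feature split, the checkpoint layout, and the Adam-to-SGD switch point are all hypothetical choices made for illustration.

    # Toy sketch: roll back to a pre-participation checkpoint, drop the
    # departing party, and retrain with a hybrid first-order scheme
    # (Adam early for fast progress, SGD later for generalization).
    # All names and hyperparameters here are assumptions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Vertical split: party A holds 3 features, party B holds 2.
    X_a, X_b = torch.randn(64, 3), torch.randn(64, 2)
    y = torch.randint(0, 2, (64,)).float()

    bottom_a = nn.Linear(3, 4)   # party A's bottom model
    bottom_b = nn.Linear(2, 4)   # party B's bottom model (to be forgotten)
    top = nn.Linear(4, 1)        # server-side top model

    # Checkpoint taken before party B's contribution entered training.
    ckpt = {"bottom_a": bottom_a.state_dict(), "top": top.state_dict()}

    # --- Unlearning: restore the checkpoint and retrain without party B ---
    bottom_a.load_state_dict(ckpt["bottom_a"])
    top.load_state_dict(ckpt["top"])
    params = list(bottom_a.parameters()) + list(top.parameters())

    loss_fn = nn.BCEWithLogitsLoss()
    switch_epoch = 5  # hypothetical optimizer switch point
    opt = torch.optim.Adam(params, lr=1e-2)

    for epoch in range(10):
        if epoch == switch_epoch:
            # Hybrid first-order step: hand off from Adam to SGD.
            opt = torch.optim.SGD(params, lr=1e-2)
        opt.zero_grad()
        logits = top(bottom_a(X_a)).squeeze(1)  # remaining parties only
        loss = loss_fn(logits, y)
        loss.backward()
        opt.step()

    print(f"post-unlearning loss: {loss.item():.4f}")

Restoring the checkpoint avoids replaying the full training history, and the optimizer hand-off is one plausible reading of "combining the strengths of existing optimizers"; the paper's actual hybrid scheme may differ.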
