Abstract

Federated learning (FL) faces many security threats. Although multiple robust FL frameworks have been proposed to defend against malicious attacks in horizontal federated learning (HFL), security issues in vertical federated learning (VFL) have not been adequately studied. Recent studies show that VFL is vulnerable to inference attacks (e.g., label inference attacks), which put VFL at risk. To address this problem, we propose SVFL (Secure Vertical Federated Learning), a new VFL framework inspired by feature disentanglement that defends against such privacy breaches. Specifically, in SVFL, the bottom models act as feature extractors that map samples into a high-dimensional feature space, and the top model concatenates the features belonging to the same sample ID. The concatenated features are then disentangled into a class-relevant component and a class-irrelevant component via two classifiers: one learns to recognize the class from the class-relevant feature through regular training, while the other is trained adversarially so that the class-irrelevant feature carries no label information. Our experiments show that SVFL not only defends against label inference attacks, regardless of how many sample features a malicious participant holds, but also improves the global model's accuracy. SVFL therefore provides a privacy guarantee for vertical federated learning systems.
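
The abstract does not specify the training mechanics of the disentanglement, so the following is a minimal PyTorch sketch of the described architecture, assuming a gradient-reversal layer for the adversarial branch (a common way to realize adversarial training). All module names, feature dimensions, participant counts, and the equal loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class BottomModel(nn.Module):
    """Per-participant feature extractor over that participant's local feature slice."""
    def __init__(self, in_dim, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, x):
        return self.net(x)

class TopModel(nn.Module):
    """Concatenates features of the same sample ID, splits them into a
    class-relevant half and a class-irrelevant half, and attaches two classifiers."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fuse = nn.Linear(feat_dim, 128)
        self.cls_relevant = nn.Linear(64, num_classes)    # regular head
        self.cls_irrelevant = nn.Linear(64, num_classes)  # adversarial head
    def forward(self, feats):
        h = torch.relu(self.fuse(torch.cat(feats, dim=1)))
        h_rel, h_irr = h[:, :64], h[:, 64:]
        logits_rel = self.cls_relevant(h_rel)
        # reversal pushes the extractor to remove label information from h_irr,
        # while the head itself still learns to classify as well as it can
        logits_irr = self.cls_irrelevant(GradReverse.apply(h_irr))
        return logits_rel, logits_irr

# One training step with two hypothetical participants holding 10 features each.
bottoms = [BottomModel(10), BottomModel(10)]
top = TopModel(feat_dim=128, num_classes=5)
params = list(top.parameters()) + [p for m in bottoms for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

x_parts = [torch.randn(32, 10), torch.randn(32, 10)]  # slices aligned by sample ID
y = torch.randint(0, 5, (32,))
logits_rel, logits_irr = top([m(x) for m, x in zip(bottoms, x_parts)])
# gradient reversal makes the second term adversarial for the feature extractor
loss = ce(logits_rel, y) + ce(logits_irr, y)
opt.zero_grad(); loss.backward(); opt.step()
```

This mirrors the split described in the abstract: the first classifier is trained normally on the class-relevant half, while the gradient-reversal layer turns the second classifier's objective into a min-max game that strips label information out of the class-irrelevant half.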
