Abstract
Vertical federated learning (VFL) is a privacy-preserving distributed learning paradigm that enables participants owning different features of the same sample space to train a machine learning model collaboratively while retaining their data locally. This paradigm offers improved efficiency and security for participants in fields such as finance and medicine, making VFL an essential component of data-driven Artificial Intelligence systems. Nevertheless, the partitioned structure of VFL can be exploited by adversaries to inject a backdoor, enabling them to manipulate the VFL predictions. In this paper, we investigate the vulnerability of VFL in the context of binary classification tasks. To this end, we define a threat model for backdoor attacks in VFL and introduce a universal adversarial backdoor (UAB) attack to poison the predictions of VFL. The UAB attack, consisting of universal trigger generation and clean-label backdoor injection, is incorporated into VFL training at specific iterations. This is achieved by alternately optimizing the universal trigger and the model parameters of the VFL sub-problems. Our work distinguishes itself from existing studies on designing backdoor attacks for VFL, as those require auxiliary information that is not accessible within the split VFL architecture. In contrast, our approach requires no additional data to execute the attack. On real-world datasets, our approach surpasses existing state-of-the-art methods, achieving up to 100% backdoor task performance while maintaining the main task performance. Our results make a major advance in revealing the hidden backdoor risks of VFL, paving the way for the future development of secure VFL applications in areas such as finance.
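The alternating scheme mentioned in the abstract (a model-parameter step interleaved with a universal-trigger step) can be illustrated with a minimal sketch. All names, the surrogate logistic model, the synthetic data, and the hyperparameters below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the adversary's (passive party's) feature partition
# in a binary classification task; labels are assumed known for the sketch.
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

w = np.zeros(d)       # local surrogate model parameters
delta = np.zeros(d)   # universal trigger: one perturbation shared by all poisoned samples
target = 1.0          # attacker's target label
lr_w, lr_delta, eps = 0.1, 0.5, 0.3


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for it in range(200):
    # Step 1 (model step): ordinary logistic-regression update on clean data,
    # standing in for the benign VFL training sub-problem.
    p = sigmoid(X @ w)
    w -= lr_w * X.T @ (p - y) / n

    # Step 2 (trigger step): gradient update pushing triggered inputs toward
    # the target label, standing in for universal trigger generation.
    p_trig = sigmoid((X + delta) @ w)
    grad_delta = w * np.mean(p_trig - target)  # gradient of the loss w.r.t. delta
    delta -= lr_delta * grad_delta
    delta = np.clip(delta, -eps, eps)          # keep the trigger bounded / inconspicuous

# Backdoor success rate: fraction of triggered samples classified as the target.
asr = np.mean(sigmoid((X + delta) @ w) > 0.5)
```

The two updates share the same surrogate loss, so the loop is a plain instance of alternating optimization; in the actual split architecture the model step would involve the exchanged intermediate embeddings rather than a locally trained classifier.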