Abstract

Split learning is a widely recognized distributed learning framework suitable for joint training scenarios with limited computing resources. However, recent research indicates that a malicious server can achieve high-quality reconstruction of the client's data through feature space hijacking attacks, leading to severe privacy leakage concerns. In this paper, we further enhance this attack to enable efficient data reconstruction while maintaining acceptable performance on the main task. Another significant advantage of our attack framework lies in its ability to fool the state-of-the-art attack detection mechanism, thus minimizing the risk of attacker exposure and making sustained attacks possible. Moreover, we adaptively refine and adjust the attack strategy, extending the data reconstruction attack for the first time to the more challenging scenario of vertically partitioned data in split learning. In addition, we introduce three training modes for the attack framework, allowing the attacker to freely choose the mode that best suits their requirements. Finally, we conduct extensive experiments on three datasets and evaluate the performance of our attack framework under different scenarios, parameter settings, and defense mechanisms. The results demonstrate the effectiveness, invisibility, and generality of our attack framework. Our research comprehensively highlights the potential privacy risks associated with split learning and sounds the alarm for its secure application.