Abstract
Distributed machine learning, such as federated learning, protects privacy by collecting gradients instead of raw training data. Recent studies have shown that gradient leakage attacks are possible in distributed machine learning, that is, the training data can be reconstructed from the shared gradients. In practice, distributed machine learning typically applies gradient compression, which serves as a defense against gradient leakage attacks while also significantly reducing communication overhead and maintaining model performance. In this paper, we propose a method to reconstruct images from compressed gradients, called Deep Leakage from Compressed Gradients (DLCG). Extensive experiments on LeNet and ResNet20-4 demonstrate that our method reconstructs recognizable images from compressed gradients with sparsity levels as high as 90% and 80%, respectively, outperforming existing methods. Furthermore, we analyze the sensitivity of gradient leakage attacks to gradients from different layers and propose a corresponding defense strategy.
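The abstract does not detail how DLCG operates, but gradient leakage attacks of this family generally follow the gradient-matching idea of Deep Leakage from Gradients (DLG): optimize a dummy input so that its gradients match the gradients shared by a client. The sketch below is a minimal, illustrative adaptation of that idea to top-k sparsified (compressed) gradients, assuming PyTorch, a known label, and an L-BFGS optimizer; all function names are hypothetical and this is not the authors' DLCG implementation.

```python
# Illustrative sketch only -- NOT the authors' DLCG implementation.
# Assumes PyTorch, a known label, and top-k gradient sparsification.
import torch
import torch.nn.functional as F

def topk_sparsify(grads, sparsity=0.9):
    """Keep only the largest-magnitude entries of each gradient tensor."""
    sparse, masks = [], []
    for g in grads:
        k = max(1, int(g.numel() * (1.0 - sparsity)))
        thresh = g.abs().flatten().topk(k).values.min()
        mask = (g.abs() >= thresh).float()
        masks.append(mask)
        sparse.append(g * mask)
    return sparse, masks

def gradient_matching_attack(model, shared_grads, masks, label,
                             img_shape, steps=300):
    """Optimize a dummy image so its masked gradients match the shared ones."""
    dummy = torch.randn(1, *img_shape, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy])

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        # Compare only at the positions that survived compression.
        match = sum(((dg * m - sg) ** 2).sum()
                    for dg, m, sg in zip(dummy_grads, masks, shared_grads))
        match.backward()
        return match

    for _ in range(steps):
        optimizer.step(closure)
    return dummy.detach()
```

Under such a scheme, higher sparsity removes more gradient entries from the matching objective, which is consistent with the abstract's finding that reconstruction quality degrades as the compression ratio grows and differs across architectures.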