Abstract

<p>In light of recent advances in deep learning and machine learning, federated learning has been proposed as a means of preventing privacy invasion. However, reconstruction attacks that exploit shared gradients to leak training data have recently been developed. Given the growing body of research on federated learning and the importance of data usage, it is crucial to prepare for such attacks. In particular, when face data are used in federated learning, the damage caused by privacy infringement can be severe. Attack studies are therefore necessary for developing effective defenses. In this study, we propose a new attack method that uses labels to achieve faster and more accurate reconstruction than previous reconstruction attacks. We demonstrate the effectiveness of the proposed method on the Yale Face Database B, MNIST, and CIFAR-10 datasets, as well as under non-IID conditions similar to real federated learning. The results show that the proposed method outperforms random labeling in reconstruction performance in all round-1 evaluations on the MNIST and CIFAR-10 datasets.</p>
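The gradient-leakage idea the abstract refers to can be illustrated on a toy linear softmax classifier. This is only a simplified sketch of the general attack class, not the paper's proposed method: for a single linear layer trained with cross-entropy, the bias gradient is negative only at the true class (so the label leaks directly), and the corresponding weight-gradient row is a scalar multiple of the input, so the training sample can be recovered exactly. All variable names here are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier: logits = W @ x + b
rng = np.random.default_rng(0)
n_classes, n_features = 5, 8
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

# The client's private training sample (unknown to the attacker)
x_true = rng.normal(size=n_features)
y_true = 3

# Cross-entropy gradients that a federated-learning server would observe
p = softmax(W @ x_true + b)
onehot = np.eye(n_classes)[y_true]
grad_W = np.outer(p - onehot, x_true)  # dL/dW = (p - onehot) x^T
grad_b = p - onehot                    # dL/db = p - onehot

# Label inference: only the true class has a negative bias gradient
y_hat = int(np.argmin(grad_b))

# Input reconstruction: row y_hat of dL/dW equals grad_b[y_hat] * x
x_hat = grad_W[y_hat] / grad_b[y_hat]
```

For deeper networks the closed-form inversion no longer applies, and attacks of this family instead optimize a dummy input so that its gradients match the observed ones; the leaked label shrinks that search space, which is the intuition behind label-assisted reconstruction.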

