Abstract

Federated learning (FL) is widely applied in healthcare systems, with the primary aim of preserving the privacy of patients' data while improving classification quality by drawing on knowledge from multiple participants. However, the training images are believed to be embedded in the shared gradient, which poses a privacy risk when gradients are exchanged among participants in FL. This work therefore designs and evaluates an image recovery attack on medical images. More specifically, dummy images are optimized so that the dummy gradient matches the shared gradient, while the smoothness and naturalness of the reconstructed images are maintained. On the adversary side, an optimization problem is formulated whose variables are the dummy images, with the network parameters treated as constants. We evaluate the gradient attack on two medical datasets, and the reconstructed images clearly reveal details of chest X-ray and MRI images, including bones and blood vessels in the captured areas. Our work aims to raise awareness of the risks of sharing gradients in FL, especially in healthcare systems.
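The attack described above can be sketched in PyTorch as a gradient-matching optimization: a dummy image is updated so that the gradient it induces matches the observed shared gradient, with a total-variation term as a stand-in for the smoothness prior. The toy linear model, image size, label knowledge, regularization weight, and optimizer settings below are all illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical small classifier standing in for the shared FL model;
# the abstract does not specify the actual architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 10))

# Victim's image and label; the adversary only observes the gradient
# that this pair produces (the label is assumed known here for simplicity).
x_true = torch.rand(1, 1, 16, 16)
y_true = torch.tensor([3])
loss_true = F.cross_entropy(model(x_true), y_true)
shared_grad = torch.autograd.grad(loss_true, model.parameters())

def total_variation(img):
    # Smoothness prior on the dummy image (assumed regularizer).
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

# Adversary's variable: the dummy image. Network parameters are constants.
x_dummy = torch.rand(1, 1, 16, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

history = []  # gradient-matching loss per step
for step in range(200):
    opt.zero_grad()
    loss_dummy = F.cross_entropy(model(x_dummy), y_true)
    # create_graph=True lets us differentiate through the dummy gradient.
    dummy_grad = torch.autograd.grad(
        loss_dummy, model.parameters(), create_graph=True
    )
    grad_match = sum(((dg - sg) ** 2).sum()
                     for dg, sg in zip(dummy_grad, shared_grad))
    objective = grad_match + 1e-3 * total_variation(x_dummy)
    objective.backward()
    opt.step()
    history.append(float(grad_match))
```

After optimization, `x_dummy` is the reconstructed image; as the matching loss in `history` shrinks, the dummy image's gradient approaches the shared one, which is what leaks the training image's content.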
