Abstract
Federated learning (FL) is widely applied in healthcare systems with the primary aim of preserving the privacy of patients' data while improving classification quality by drawing on knowledge from multiple participants. However, information about the training images can be embedded in the shared gradients, which poses a privacy risk when gradients are exchanged with other participants in FL. This work therefore designs and evaluates an image recovery attack on medical images. More specifically, dummy images are optimized so that their gradients match the shared gradients while preserving the smoothness and naturalness of the reconstructed images. On the adversary's side, this is formulated as an optimization problem in which the dummy images are the variables and the network parameters are treated as constants. We evaluate the gradient attack on two medical datasets; the reconstructed images clearly reveal details of chest X-ray and MRI images, including bones and blood vessels in the captured areas. Our work aims to raise awareness of the risks of sharing gradients in FL, especially in healthcare systems.
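The gradient-matching idea described above can be sketched on a toy model. The following is a minimal, illustrative NumPy example, not the paper's actual method or setup: it assumes a single linear layer with squared-error loss, that the adversary knows the model weights and the label (all hypothetical simplifications), and uses a small total-variation term as the smoothness prior.

```python
import numpy as np

# Toy gradient-inversion sketch (illustrative only; the model, sizes, and
# the assumption that the adversary knows the label are hypothetical).
rng = np.random.default_rng(0)
C, D = 3, 8                               # outputs, flattened "image" size
W = rng.normal(size=(C, D)) / np.sqrt(D)  # shared model weights (constants)
x_true = rng.uniform(size=D)              # victim's private "image"
t = np.zeros(C); t[0] = 1.0               # one-hot label (assumed known)

def grads(x):
    """Victim-side gradients of L = 0.5*||W x - t||^2 w.r.t. W and bias b."""
    r = W @ x - t
    return np.outer(r, x), r              # dL/dW, dL/db

gW_shared, gb_shared = grads(x_true)      # what the victim uploads in FL

# Adversary: optimize a dummy image so its gradients match the shared ones,
# plus a small total-variation prior that encourages smooth reconstructions.
lam, lr = 1e-4, 0.02
x = rng.uniform(size=D)                   # dummy image initialization
for _ in range(5000):
    r = W @ x - t
    M = np.outer(r, x) - gW_shared        # residual of the dL/dW match
    g = 2 * (W.T @ (M @ x) + M.T @ r)     # d/dx of ||M||_F^2
    g += 2 * (W.T @ (r - gb_shared))      # d/dx of ||r - gb_shared||^2
    d = np.diff(x)                        # 1-D total-variation smoothness
    tv_g = np.zeros_like(x)
    tv_g[:-1] -= 2 * d
    tv_g[1:] += 2 * d
    x -= lr * (g + lam * tv_g)

print("reconstruction MSE:", np.mean((x - x_true) ** 2))
```

In the paper's setting the dummy images are full chest X-ray or MRI inputs to a deep network and the inner gradients are obtained by automatic differentiation, but the structure of the attack is the same: an outer loop descends on the mismatch between the dummy gradients and the shared gradients, regularized toward natural-looking images.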