Abstract

Federated Learning (FL) allows multiple confidential nodes to collaboratively train a common model without actually sharing their data with one another. This is particularly relevant in healthcare applications, where data such as medical records are private and confidential. Although federated learning avoids the exchange of actual data, it is still possible to infer private information from the parameter values revealed during training or from the resulting Machine Learning (ML) model. This study examines FL's privacy and security concerns and addresses several issues related to privacy protection and safety when developing FL systems. In addition, we present detailed simulation results to illustrate the problems under discussion and potential solutions.
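To make the setting concrete, the following is a minimal, hypothetical sketch of the parameter-sharing pattern the abstract describes: each node trains locally on its own data and only model parameters, never raw records, reach the server, which aggregates them by averaging (as in FedAvg). All names, the linear model, and the synthetic data are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One local gradient step on a linear model; raw data never leaves the node."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def fed_avg(client_weights):
    """Server-side aggregation: average the parameter vectors from all nodes."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth model (illustrative)
global_w = np.zeros(2)           # shared global model

for _ in range(50):              # communication rounds
    updates = []
    for _ in range(3):           # three confidential nodes
        X = rng.normal(size=(32, 2))   # each node's private data
        y = X @ true_w
        updates.append(local_update(global_w.copy(), X, y))
    global_w = fed_avg(updates)  # only parameters are exchanged

print(np.round(global_w, 2))
```

Note that even though no data is exchanged, the per-round parameter updates themselves can leak information about each node's data, which is exactly the attack surface the paper studies.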
