Abstract

Public administration frequently deals with personal data that is geographically scattered across multiple government locations and organizations. As digital technologies advance, public administration increasingly relies on collaborative intelligence while needing to protect individual privacy. In this context, federated learning has emerged as a promising technique for training machine learning models on private, distributed data while preserving data privacy. This work examines the trade-off between privacy guarantees and vulnerability to membership inference attacks in differentially private federated learning in the context of public administration applications. Real-world data from collaborating organizations, specifically payroll data from the Ministry of Education and public opinion survey data from the Asia Foundation in Afghanistan, were used to evaluate the effectiveness of noise injection, a typical defense against membership inference attacks, at different noise levels. The investigation focused on the impact of noise on model performance and on privacy metrics relevant to public administration data. The findings demonstrate that excessive noise reduces model accuracy, making a balanced compromise between data privacy and model utility essential. They also underscore the need for careful calibration of noise levels in differentially private federated learning for public administration tasks, so that data privacy and model utility remain well balanced, contributing toward transparent government practices.
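The sketch below illustrates the general noise-injection defense the abstract refers to: each client's model update is clipped to bound its sensitivity and perturbed with Gaussian noise before aggregation, the standard Gaussian-mechanism pattern in differentially private federated learning. The function and parameter names (`privatize_update`, `clip_norm`, `noise_std`) are illustrative assumptions, not the paper's actual implementation, and the paper evaluates a range of noise levels rather than these specific values.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a client's model update and add Gaussian noise (Gaussian mechanism).

    clip_norm and noise_std are illustrative hyperparameters; higher noise_std
    strengthens privacy but, as the paper finds, degrades model accuracy.
    """
    rng = rng or np.random.default_rng()
    norm = max(np.linalg.norm(update), 1e-12)
    clipped = update * min(1.0, clip_norm / norm)  # bound each client's contribution
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

# Server-side federated averaging over privatized client updates
# (random vectors stand in for real model gradients here).
client_updates = [np.random.randn(10) for _ in range(5)]
aggregate = np.mean([privatize_update(u) for u in client_updates], axis=0)
```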
