Abstract

Federated Learning (FL), as an emerging form of distributed machine learning (ML), can protect participants’ private data from being substantially disclosed to cyber adversaries. It has potential uses in many large-scale, data-rich environments, such as the Internet of Things (IoT), Industrial IoT, Social Media (SM), and the emerging SM 3.0. However, federated learning remains susceptible to data leakage through model inversion attacks, which analyze the model updates participants upload. Such attacks can reveal private data and undermine key motivations for adopting federated learning in the first place. This article proposes a novel differential privacy (DP)-based deep federated learning framework. We theoretically prove that the framework satisfies DP’s requirements under distinct privacy levels by appropriately adjusting the scaled variances of Gaussian noise. We then develop a Differentially Private Data-Level Perturbation (DP-DLP) mechanism to conceal any single data point’s impact on the training phase. Experiments on real-world datasets, specifically the Social Media 3.0, Iris, and Human Activity Recognition (HAR) datasets, demonstrate that the proposed mechanism offers high privacy, enhanced utility, and elevated efficiency. Consequently, it simplifies the development of DP-based FL models with different tradeoff preferences between data utility and privacy levels.
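
To make the perturbation idea concrete, the minimal sketch below illustrates a generic Gaussian mechanism of the kind such a framework builds on: each client clips its model update to bound the L2 sensitivity of any single data point, then adds Gaussian noise whose variance scales with the chosen privacy level. The clipping norm, the noise calibration (the standard sigma = C * sqrt(2 ln(1.25/delta)) / epsilon rule for a single release), and the function names are illustrative assumptions, not the authors' DP-DLP implementation.

```python
# Illustrative sketch of Gaussian-mechanism perturbation of a client's
# model update; parameters and names are assumptions, not the paper's code.
import numpy as np

def perturb_update(update, epsilon, delta, clip_norm=1.0, rng=None):
    """Clip an update to bound per-point L2 sensitivity, then add Gaussian
    noise calibrated so a single release satisfies (epsilon, delta)-DP."""
    rng = rng or np.random.default_rng()
    # Bound any single data point's influence via L2 clipping.
    norm = float(np.linalg.norm(update))
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism noise scale (Dwork & Roth, 2014):
    # sigma = C * sqrt(2 ln(1.25 / delta)) / epsilon.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# A smaller epsilon (stronger privacy) yields a larger noise variance,
# which is the utility/privacy tradeoff the abstract refers to.
update = np.random.default_rng(0).normal(size=10)
private_update = perturb_update(update, epsilon=0.5, delta=1e-5)
```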
