Federated learning (FL), which enables multiple distributed devices (clients) to collaboratively train a global model without transmitting their private data, has attracted considerable attention in the Internet of Things (IoT) domain. Compared with centralized learning, FL offers a clear privacy advantage because it keeps the clients' raw data out of adversaries' direct reach. Furthermore, to prevent adversaries from inferring private information from the transmitted parameters, several FL algorithms based on differential privacy (DP) have been proposed, in which the clients add artificial noise to their local parameters for privacy protection. However, the added noise disrupts the learning process and degrades the performance of the trained model. To address this, in this article we develop a performance-enhanced DP-based FL (PEDPFL) algorithm, in which a classifier-perturbation regularization method improves the robustness of the trained model against DP-injected noise. We provide theoretical privacy and convergence analyses of the proposed algorithm and characterize the influence of key hyperparameters on its convergence. Simulation results on real-world data sets show that, at the same level of privacy protection, the proposed algorithm achieves better classification performance than existing DP-based FL algorithms, making it well suited to IoT applications.
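To make the DP mechanism described above concrete, the following is a minimal sketch of how a client might clip and perturb its local parameters before upload, using the standard Gaussian mechanism. The function name `dp_perturb_update` and the parameters `clip_norm` and `noise_std` are illustrative assumptions, not the paper's API; the abstract does not specify the noise calibration, and the classifier-perturbation regularization itself is not shown here since the abstract gives no details of it.

```python
import numpy as np

def dp_perturb_update(local_params, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's local parameter update and add Gaussian noise.

    A generic DP sketch: in practice, noise_std would be calibrated to
    clip_norm and the target (epsilon, delta) privacy budget, which the
    abstract does not specify.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Bound each client's influence by clipping the update's L2 norm.
    norm = np.linalg.norm(local_params)
    clipped = local_params * min(1.0, clip_norm / max(norm, 1e-12))
    # Inject Gaussian noise scaled to the clipping bound.
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

# Example: each client perturbs its update locally; the server only ever
# sees noisy parameters, which it averages into the global model update.
updates = [dp_perturb_update(np.random.randn(10)) for _ in range(5)]
global_update = np.mean(updates, axis=0)
```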