Abstract

Artificial Intelligence (AI) has been applied to solve a wide range of real-world problems in recent years. However, the emergence of new AI technologies has raised several concerns, particularly with regard to communication efficiency, security threats and privacy violations. In this context, Federated Learning (FL) has received widespread attention due to its ability to facilitate the collaborative training of local learning models without compromising data privacy. However, recent studies have shown that FL still consumes considerable communication resources to exchange model updates, and that data privacy can still be compromised when the parameters of the local models are shared to update the global model. To address these issues, we propose a new approach, namely Federated Optimisation (FedOpt), which promotes both communication efficiency and privacy preservation in FL. To implement FedOpt, we design a novel Sparse Compression Algorithm (SCA) for efficient communication, and integrate additively homomorphic encryption with differential privacy to prevent data leakage. The proposed FedOpt thus smoothly trades off communication efficiency against privacy preservation during the learning task. The experimental results demonstrate that FedOpt outperforms the state-of-the-art FL approaches. In particular, we consider three evaluation criteria: model accuracy, communication efficiency and computation overhead. We compare the proposed FedOpt with baseline configurations and with state-of-the-art approaches, i.e., Federated Averaging (FedAvg) and Paillier-encryption-based privacy-preserving deep learning (PPDL), on all three criteria. The results show that FedOpt converges within fewer training epochs and under a smaller privacy budget.
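
The details of SCA and of the paper's Laplace-based perturbation are not given in this summary, so the sketch below is only a rough illustration of the two building blocks the abstract names: top-k gradient sparsification for compressed uploads, and the Laplace mechanism for differential privacy. The function names and the `ratio`, `sensitivity` and `epsilon` parameters are our own assumptions, not taken from the paper.

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    # Keep only the largest-magnitude fraction of gradient entries and
    # transmit (indices, values) instead of the dense update.
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # top-k by magnitude
    return idx, flat[idx]

def laplace_perturb(values, sensitivity, epsilon, rng=None):
    # Standard Laplace mechanism: noise with scale = sensitivity / epsilon
    # yields epsilon-differential privacy for the perturbed values.
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return values + rng.laplace(0.0, scale, size=values.shape)

# Example: compress a 10,000-entry gradient to 100 noisy values.
grad = np.random.default_rng(0).standard_normal(10_000)
idx, vals = topk_sparsify(grad, ratio=0.01)
noisy = laplace_perturb(vals, sensitivity=1.0, epsilon=0.5)
```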

Highlights

  • Artificial Intelligence (AI) has been employed in a plethora of application fields in recent years [1]. In this context, as a notable branch of AI, Deep Learning (DL) has been broadly used to power a wide range of data-driven real-world applications, such as facial recognition, autonomous driving and smart grid systems [2,3,4]

  • We propose a new approach, namely Federated Optimisation (FedOpt), to promote communication efficiency and privacy preservation in Federated Learning (FL)

  • These privacy threats can be mitigated by distributing the local training among multiple edge-devices, which has led to the emergence of Federated Learning (FL) [6]


Summary

Introduction

Artificial Intelligence (AI) has been employed in a plethora of application fields in recent years [1]. In a typical FL protocol, the server first distributes the current global model to the users, who then train it on their local data. In the third step, all the users upload the parameters of their locally trained models to the server, where they are aggregated to generate a new global model. These three steps are repeated until the desired convergence level is achieved. Specifically, following the above FL protocol, each user has to communicate its full gradient update during each epoch. This update is normally the same size as the fully trained model, which can amount to gigabytes depending on the DL architecture and its millions of parameters [9]. To the best of our knowledge, none of the existing approaches supports communication efficiency and privacy preservation in FL at the same time [12]. To this end, in this paper, we propose a novel approach, namely Federated Optimisation (FedOpt), based on Distributed Stochastic Gradient Descent (DSGD) optimisation.
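
To make the three-step protocol concrete, here is a minimal FedAvg-style round in Python. `federated_round` and `local_train` are our own illustrative names, and plain parameter averaging stands in for the paper's DSGD-based aggregation, so treat this as a sketch of the generic protocol rather than FedOpt itself.

```python
import numpy as np

def federated_round(global_w, user_datasets, local_train):
    # Steps 1-2: broadcast the global model; each user trains it locally.
    local_models = [local_train(global_w.copy(), data)
                    for data in user_datasets]
    # Step 3: the server aggregates the uploaded parameters by averaging.
    return np.mean(local_models, axis=0)

# Toy demo: each user takes one SGD step on a local least-squares problem.
def local_train(w, data, lr=0.1):
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)    # gradient of 0.5*||Xw - y||^2 / n
    return w - lr * grad

rng = np.random.default_rng(0)
users = [(rng.standard_normal((20, 5)), rng.standard_normal(20))
         for _ in range(4)]
w = np.zeros(5)
for epoch in range(50):                  # repeat until convergence
    w = federated_round(w, users, local_train)
```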

System Model
Problem Statement
Federated Learning
Additively Homomorphic Encryption
Differential Privacy
Laplace Mechanism
Gradient Aggregation in FedOpt
Efficiency and Privacy in FedOpt
Encryption Phase
Aggregation Phase
Decryption Phase
FedOpt Evaluation
Accuracy Test
Communication Efficiency
Computation Overhead
Related Work and Discussions
Findings
Conclusions