Abstract

The widely deployed devices in the Internet of Things (IoT) have opened up a large amount of IoT data. Recently, federated learning has emerged as a promising solution for protecting user privacy on IoT devices by training a globally shared model. However, devices in complex IoT environments pose great challenges to federated learning, which is vulnerable to gradient-based reconstruction attacks. In this paper, we comprehensively discuss the relationship between the security of a federated learning model and the optimization technologies used to decrease communication overhead. To promote both efficiency and security, we propose a defence strategy for federated learning that is suitable for resource-constrained IoT devices. The adaptive communication strategy adjusts the communication frequency and parameter compression by analysing the training loss to ensure the security of the model. Experiments show the efficiency of our proposed method in decreasing communication overhead while preventing leakage of private data.
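
The strategy described above tunes two knobs from the observed training loss: how often clients synchronize with the server, and how strongly their parameters are compressed. The sketch below is a minimal illustration of such a loss-driven schedule, assuming a top-k style compressor; the names (`Schedule`, `adjust_schedule`) and the plateau threshold are illustrative assumptions, not the authors' published code.

```python
# Minimal sketch (not the authors' code) of a loss-driven schedule:
# start with frequent synchronization and minimal compression, then
# reduce frequency and compress harder as the loss plateaus.

from dataclasses import dataclass

@dataclass
class Schedule:
    sync_period: int    # local steps between uploads; larger = rarer uploads
    keep_ratio: float   # fraction of parameters kept by the compressor

def adjust_schedule(sched: Schedule, prev_loss: float, curr_loss: float,
                    max_period: int = 32, min_keep: float = 0.01) -> Schedule:
    """If the relative loss improvement has stalled, the model is close to
    convergence: communicate less often and compress more aggressively,
    which saves bandwidth and exposes less gradient information to
    reconstruction attacks. While the loss is still falling, keep the
    initial high frequency and light compression to preserve accuracy."""
    improvement = (prev_loss - curr_loss) / max(prev_loss, 1e-12)
    if improvement < 0.01:  # plateauing: tighten communication
        return Schedule(min(sched.sync_period * 2, max_period),
                        max(sched.keep_ratio / 2, min_keep))
    return sched  # still improving: keep frequent, lightly compressed rounds

# Example: begin with per-step sync and no compression.
sched = Schedule(sync_period=1, keep_ratio=1.0)
sched = adjust_schedule(sched, prev_loss=0.900, curr_loss=0.895)  # plateau
```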

Highlights

  • In recent years, the Internet of Things (IoT) has gained great popularity in many aspects of modern life, and a huge number of IoT services are emerging

  • By studying how Deep Leakage from Gradients (DLG) works, we find that the quality of the images it reconstructs is affected by a number of factors that also determine federated learning efficiency (see the attack sketch after this list)

  • In order to improve the security of the federated learning model and reduce the effect on the quality of the global model, we propose adaptive frequency-compression federated learning (AFC-FL), which adjusts the communication frequency and parameter compression. The weights of the two factors are adjusted adaptively to ensure the accuracy of federated learning while providing higher security. This calls for AFC-FL to start from a higher communication frequency and minimal compression and to adjust both gradually as the model gets closer to convergence
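
For readers unfamiliar with the attack named in the second highlight, the following is a minimal PyTorch sketch of DLG (Zhu et al., 2019): an attacker holding only a shared gradient optimizes a dummy input and a soft label with L-BFGS until their gradient matches the observed one. The linear model and data sizes are illustrative assumptions, not the setup evaluated in the paper.

```python
# Minimal sketch of the Deep Leakage from Gradients (DLG) attack.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(32 * 32 * 3, 10))

# --- Victim side: one real example produces the shared gradient ---
x_real = torch.rand(1, 3, 32, 32)
y_real = torch.tensor([3])
loss = F.cross_entropy(model(x_real), y_real)
true_grads = torch.autograd.grad(loss, model.parameters())

# --- Attacker side: recover (x, y) by gradient matching ---
x_dummy = torch.rand_like(x_real, requires_grad=True)
y_dummy = torch.rand(1, 10, requires_grad=True)  # soft-label logits
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy with a soft label, differentiable in both inputs.
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1)
                           * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)  # x_dummy converges toward the private x_real
```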


Introduction

The Internet of Things (IoT) has gained great popularity in many aspects of modern life, and a huge number of IoT services are emerging. Federated learning preserves the privacy of clients by keeping their original training data on their own devices while jointly learning a global model, sharing only local parameters with the server. The baseline communication protocol used in many early federated learning implementations has each client send a full vector of local parameter updates back to the federated learning server in every round. In this context, current research focuses on reducing the transfer cost of model parameters to make federated learning more communication-efficient, with gradient compression and periodic communication methods being the most intensively researched: instead of the full update, a compressed message M is sent to the other nodes (a minimal sketch follows).
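
As a concrete illustration of the gradient compression mentioned above, the sketch below implements a common top-k compressor, assuming the compressed message M consists of the k largest-magnitude entries and their indices; the function names are hypothetical, not from the paper.

```python
# Hedged sketch of top-k gradient compression: each client sends only the
# k largest-magnitude entries (values + indices) as the message M, rather
# than the full parameter vector.
import torch

def compress_topk(grad: torch.Tensor, keep_ratio: float):
    """Client side: return (values, indices, numel) -- the message M."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * keep_ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, flat.numel()

def decompress_topk(values, indices, numel, shape):
    """Server side: scatter the sparse message back into a dense update."""
    flat = torch.zeros(numel)
    flat[indices] = values
    return flat.reshape(shape)

g = torch.randn(4, 5)
msg = compress_topk(g, keep_ratio=0.1)        # ~90% fewer entries sent
g_hat = decompress_topk(*msg, shape=g.shape)  # sparse approximation of g
```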
