Abstract

This paper presents a federated learning approach that utilizes the computational resources of IoT edge devices for training deep neural networks. In this approach, the edge devices and the cloud server collaborate in the training phase while preserving the privacy of the edge device data. Owing to the limited computational power and resources available on the edge devices, instead of the original neural network (NN), we propose using a smaller NN, generated from the main NN model by a proposed heuristic method, for training on each edge device. By exploiting the Knowledge Distillation (KD) approach, the knowledge learned on the server and on the edge devices can be exchanged, reducing the computation required on the server while preserving the data privacy of the edge devices. In addition, to reduce the knowledge-transfer overhead on the communication links between the server and the edge devices, a method for selecting the most valuable data for transferring knowledge is introduced. The effectiveness of the proposed method is assessed by comparing it to state-of-the-art methods. The results show that the proposed method lowers communication traffic by up to 250× and increases learning accuracy in the cloud by an average of 8.9% over prior KD-based distributed training approaches on the CIFAR-10 dataset.
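To make the teacher-student exchange concrete, the sketch below illustrates the standard Hinton-style distillation loss on which such KD-based knowledge transfer is typically built, together with an entropy-based heuristic for picking the samples most worth transmitting. The temperature T, the weight alpha, and the entropy criterion are illustrative assumptions for this sketch, not necessarily the paper's exact method.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Standard Hinton-style distillation objective: KL divergence
        # between the temperature-softened teacher and student
        # distributions, blended with the usual cross-entropy on the
        # ground-truth labels. T and alpha are assumed hyperparameters.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradient magnitude is independent of T
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    def select_most_valuable(logits, k):
        # Hypothetical selection heuristic: keep the k samples whose
        # predictions have the highest entropy, i.e. the ones the model
        # is least certain about, and transmit only those, cutting
        # traffic on the server-edge link.
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        return entropy.topk(k).indices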
