Abstract

The ever-increasing number of Internet of Things (IoT) devices is continuously generating huge volumes of data, but the current cloud-centric approach to IoT big data analysis has raised public concerns about both data privacy and network cost. Federated learning (FL) has recently emerged as a promising technique to address these concerns: a global model is learned by aggregating local updates from multiple devices without sharing the privacy-sensitive raw data. However, IoT devices usually have constrained computation resources and poor network connections, making it infeasible or very slow to train deep neural networks (DNNs) following the FL pattern. To address this problem, we propose a new efficient FL framework called FL-PQSU in this paper. It is composed of a 3-stage pipeline: structured pruning, weight quantization, and selective updating, which work together to reduce the costs of computation, storage, and communication and thereby accelerate the FL training process. We study FL-PQSU using popular DNN models (AlexNet, VGG16) and publicly available datasets (MNIST, CIFAR10), and demonstrate that it can well control the training overhead while still guaranteeing the learning performance.
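As a rough illustration of the three stages, the sketch below shows one plausible realization of each step in NumPy. The specific choices here (per-filter L1-norm pruning, uniform symmetric 8-bit quantization, and an update-norm threshold) are our assumptions for illustration, not necessarily the exact criteria used in FL-PQSU.

```python
import numpy as np

def structured_prune(conv_w, keep_ratio=0.5):
    """Stage 1 (assumed criterion): drop whole convolution filters with
    the smallest L1 norms; conv_w has shape [out_ch, in_ch, k, k]."""
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))
    keep = np.sort(np.argsort(norms)[-max(1, int(len(norms) * keep_ratio)):])
    return conv_w[keep]

def quantize(w, bits=8):
    """Stage 2 (assumed scheme): uniform symmetric quantization, so the
    client ships low-bit integers plus one float scale instead of
    full-precision weights."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) or 1.0  # guard all-zero w
    return np.round(w / scale).astype(np.int8), scale

def selective_update(delta, threshold=1e-3):
    """Stage 3 (assumed rule): upload the local update only if it changes
    the model enough; returning None skips this round's communication."""
    return delta if np.linalg.norm(delta) > threshold else None
```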

Highlights

  • Over the past decades, Artificial Intelligence (AI) technology has made great strides in a variety of real-life applications, ranging from image recognition, video surveillance, speech synthesis, to machine translation

  • The development of these AI-based applications relies heavily on the knowledge embedded in big data, which are indispensable for training high-performance AI models such as the deep neural networks (DNNs)

  • Worse still, Internet of Things (IoT) devices are usually resource-constrained in terms of computation capability and communication bandwidth, so the training time can be excessively long or even unbearable


Summary

INTRODUCTION

Artificial Intelligence (AI) technology has made great strides in a variety of real-life applications, ranging from image recognition, video surveillance, and speech synthesis to machine translation. It is preferable to decouple model training from the need to remotely collect and centrally process the raw data. In view of these shortcomings of the cloud-centric solution, researchers have recently proposed a new approach that trains a shared global model on datasets decentrally located at a loose federation of participating devices (clients), termed Federated Learning (FL) [3]. Worse still, IoT devices are usually resource-constrained in terms of computation capability and communication bandwidth, so the training time can be excessively long or even unbearable, which is not conducive to the rapid deployment and application of AI models in practice. We propose FL-PQSU, a new framework that enables efficient model training on resource-limited IoT devices by incorporating overhead-reduction techniques into standard FL.
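For reference, the aggregation step at the heart of standard FL (FedAvg [3]), on which FL-PQSU builds, can be sketched as follows; the toy model size and per-client sample counts below are placeholders, not values from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg [3]: the server averages the returned client models,
    weighted by the number of local training samples each client used."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One communication round with placeholder numbers: the server broadcasts
# global_w, each client trains on its own data, and the server averages
# the returned models without ever seeing the raw data.
global_w = np.zeros(10)
local_models = [global_w + 0.01 * np.random.randn(10) for _ in range(3)]
global_w = fedavg(local_models, client_sizes=[100, 200, 50])
```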

RELATED WORK
STANDARD FL PROCEDURE
FL-PQSU FRAMEWORK
STRUCTURED PRUNING
WEIGHT QUANTIZATION
SELECTIVE UPDATING
ALEXNET TRAINING ON MNIST
VGG16 TRAINING ON CIFAR10
Findings
CONCLUSION AND FUTURE WORK

