Abstract

Nowadays, intelligent edge devices such as smartphones, wearable devices, and autonomous vehicles are being deployed at an ever-increasing rate, enabled by the integration of advanced sensors, higher computing capabilities, and widespread internet availability. These edge devices generate vast amounts of data that can be utilized for improved inference. However, traditional machine learning (ML) algorithms, which work in a centralized fashion where all the available data is accumulated beforehand, face challenges due to privacy concerns, communication overhead, processing delay, and security issues. Federated learning (FL) is a new distributed on-device learning method that generates a global model through the collaboration of edge devices without compromising data privacy. In this paper, we propose a federated transfer learning (FTL) model that accounts for clients' heterogeneity in terms of their available computing resources and model architecture. We simulate the training performance of heterogeneous clients and observe that clients with sufficient resources require significantly lower computational time, whereas resource-constrained clients take notably longer to accomplish a given task. Motivated by this observation, we design an FL model that constructs multiple global models based on the available resources of the clients and carries out a separate training process for each of the global models. We demonstrate the effectiveness of the proposed strategy by evaluating our FL model on the CIFAR-100 dataset. Our findings show that training time differs significantly among heterogeneous clients and that assigning multiple global models can notably improve convergence time.
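
The core idea of grouping clients by available resources and maintaining one global model per group can be illustrated with a minimal sketch. The tier names, model sizes, synthetic local task, and FedAvg-style averaging below are illustrative assumptions for exposition, not the paper's actual implementation or dataset.

import numpy as np

rng = np.random.default_rng(0)

# Each client reports a resource tier; tiers get separately sized models (assumed values).
CLIENTS = [
    {"id": 0, "tier": "high"}, {"id": 1, "tier": "high"},
    {"id": 2, "tier": "low"},  {"id": 3, "tier": "low"},
]
TIER_DIM = {"high": 64, "low": 16}

# One global model (a weight vector here) per resource tier.
global_models = {t: np.zeros(d) for t, d in TIER_DIM.items()}

def local_update(weights, n_steps=5, lr=0.1):
    """Toy local training: gradient steps on a client-specific synthetic least-squares task."""
    d = weights.shape[0]
    X = rng.normal(size=(32, d))
    y = X @ rng.normal(size=d)
    w = weights.copy()
    for _ in range(n_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

for rnd in range(3):  # federated rounds
    for tier, w_global in global_models.items():
        members = [c for c in CLIENTS if c["tier"] == tier]
        # Each group member trains starting from its tier's global model...
        client_weights = [local_update(w_global) for _ in members]
        # ...and the server averages the results within that tier only (FedAvg-style).
        global_models[tier] = np.mean(client_weights, axis=0)
    print(f"round {rnd}: " + ", ".join(
        f"{t} model norm={np.linalg.norm(w):.3f}" for t, w in global_models.items()))

Because each tier trains a model matched to its members' capacity, slow clients no longer gate the rounds of the larger model, which is the mechanism behind the convergence-time improvement reported above.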
