Abstract

Federated learning is an emerging paradigm that enables privacy-preserving collaboration among multiple participants for model training without sharing private data. Participants with heterogeneous devices and networking resources slow down training and aggregation. Each participant's local dataset also exhibits high variability, meaning its characteristics change over time. Moreover, preserving the personalized characteristics of the local dataset on each participant device is a prerequisite for achieving better performance. This article proposes a model-personalization-based federated learning approach that operates in the presence of variability in the local datasets and involves participants with heterogeneous devices and networking resources. The central server initiates the approach and constructs a base model that executes on most participant devices. The approach simultaneously learns personalized models and handles the variability in the datasets. For devices where the base model does not fit directly, we propose a knowledge distillation-based early-halting approach; the early halting speeds up model training. We also propose an aperiodic global update approach that lets participants share their updated parameters with the server aperiodically. Finally, we conduct a real-world study to evaluate the performance of the approach and compare it with state-of-the-art techniques.
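
The abstract does not spell out the aperiodic global update mechanism, but a minimal sketch of the general idea is given below: each participant trains locally on a personalized model and uploads its parameters only when they have drifted sufficiently from the last uploaded version, while the server aggregates whatever updates arrive in a round. All names, the drift threshold, and the mixing of global and local parameters are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of aperiodic global updates in a federated setting.
# Assumptions (not from the paper): least-squares local objective, a fixed
# drift threshold as the upload trigger, and 50/50 global-local mixing.
import numpy as np

rng = np.random.default_rng(0)

def local_train(params, data, lr=0.1):
    """One gradient step on a least-squares objective (stand-in for the
    participant's personalized training step)."""
    X, y = data
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def should_upload(params, last_uploaded, threshold=0.05):
    """Aperiodic trigger: upload only if the parameters moved far enough."""
    return np.linalg.norm(params - last_uploaded) > threshold

# Synthetic participants with heterogeneous data (different targets emulate
# the variability across local datasets).
dim, n_clients = 5, 4
clients = []
for c in range(n_clients):
    X = rng.normal(size=(50, dim))
    w_true = rng.normal(size=dim) * (c + 1)
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append({"data": (X, y),
                    "params": np.zeros(dim),
                    "last_uploaded": np.zeros(dim)})

global_params = np.zeros(dim)

for rnd in range(20):
    received = []
    for client in clients:
        # Personalization: start each round from a mix of the global model
        # and the participant's own local model.
        client["params"] = 0.5 * global_params + 0.5 * client["params"]
        client["params"] = local_train(client["params"], client["data"])
        if should_upload(client["params"], client["last_uploaded"]):
            received.append(client["params"].copy())
            client["last_uploaded"] = client["params"].copy()
    if received:  # The server aggregates only the updates it actually received.
        global_params = np.mean(received, axis=0)

print("global parameters after 20 rounds:", np.round(global_params, 3))
```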
