Abstract
Training machine learning models in a datacenter on data that originates from edge nodes incurs high communication overhead and violates users' privacy. These challenges can be tackled with Federated Learning (FL), a machine learning technique that trains a model across multiple decentralized edge devices (workers) using their local data. In this paper, we explore an approach that identifies the most representative updates made by workers and uploads only those to the central server, reducing network communication costs. Based on this idea, we propose an FL model that mitigates communication overhead via clustering analysis of the workers' local updates. The Cluster Analysis-based Federated Learning (CA-FL) model is studied and evaluated on human activity recognition (HAR) datasets. Our evaluation results show the robustness of CA-FL compared with traditional FL in terms of accuracy and communication costs in both IID and non-IID cases.
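The core idea of selecting representative updates via clustering can be illustrated with a minimal sketch. The function below is an assumption for illustration only, not the paper's exact CA-FL algorithm: it clusters flattened worker updates with a simple k-means (deterministic farthest-point initialization) and returns, for each cluster, the index of the update closest to the centroid, so that only those representatives would be uploaded.

```python
import numpy as np

def _farthest_point_init(X, k):
    # Deterministic farthest-point initialization: start from X[0],
    # then repeatedly add the point farthest from the chosen set.
    idx = [0]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(d.argmax()))
    return X[idx].copy()

def select_representative_updates(updates, k, iters=20):
    """Cluster flattened worker updates with k-means and return, per
    cluster, the index of the worker update nearest its centroid.

    Hypothetical sketch of the clustering idea, not the paper's method.
    """
    X = np.stack([np.ravel(u) for u in updates])   # (workers, params)
    centroids = _farthest_point_init(X, k)
    for _ in range(iters):
        # Assign each update to its nearest centroid, then recompute means.
        dist = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    reps = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            d = np.linalg.norm(X[idx] - centroids[c], axis=1)
            reps.append(int(idx[d.argmin()]))
    return sorted(reps)

# Toy example: 6 workers whose updates fall into two distinct regions;
# only one representative per region would be sent to the server.
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.1, 4) for _ in range(3)] + \
          [rng.normal(5, 0.1, 4) for _ in range(3)]
reps = select_representative_updates(updates, k=2)
print(reps)  # two indices, one from each group of workers
```

With 6 workers and 2 clusters, only 2 updates reach the server instead of 6, which is the communication saving the abstract describes.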