Abstract

Federated Learning (FL) is a distributed machine learning paradigm in which data never leave local devices, making it well suited to data sharing in untrusted environments such as the Internet of Vehicles (IoV). However, FL must frequently exchange large volumes of model parameters to reach preset model targets, and in IoV settings the fluctuating bandwidth and communication delays caused by user mobility complicate the synchronization of model parameters. In this paper, an Efficient Hierarchical Asynchronous Federated Learning (EHAFL) algorithm is proposed that dynamically adjusts the encoding length according to the available bandwidth, substantially reducing communication cost. A dynamic hierarchical asynchronous aggregation mechanism is further proposed, combining gradient sparsification with asynchronous aggregation to cut communication costs further and improve the aggregation efficiency of the global model. Simulation results on MNIST and real-world datasets show that the proposed solution reduces communication costs by 98% while sacrificing only 1% of model accuracy.
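
The abstract does not detail EHAFL's sparsification criterion. As a minimal sketch of the gradient sparsification step it mentions, the following assumes a top-k magnitude rule (a common choice, not confirmed by the paper); the function name `sparsify_top_k` and the 2% keep ratio are illustrative only.

```python
import numpy as np

def sparsify_top_k(gradient: np.ndarray, ratio: float = 0.02):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns the indices and values of the retained entries; the remaining
    entries are treated as zero and need not be transmitted, so upload size
    scales with k rather than with the full model size.
    """
    flat = gradient.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-magnitude entries (order among them is arbitrary).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

# Hypothetical client-side use: send (idx, vals) instead of the dense gradient.
grad = np.random.randn(100_000).astype(np.float32)
idx, vals = sparsify_top_k(grad, ratio=0.02)
print(f"sent {idx.size} of {grad.size} entries "
      f"({100 * idx.size / grad.size:.1f}% of the dense payload)")
```

With a 2% keep ratio, each client transmits roughly 2% of the dense gradient payload per round; the 98% overall reduction reported in the abstract comes from the full EHAFL pipeline (adaptive encoding plus hierarchical asynchronous aggregation), not from sparsification alone.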
