Abstract

With the development of the Internet of Things, edge computing applications place increasing emphasis on privacy and real-time performance. Federated learning, a promising machine learning paradigm that protects user privacy, has therefore attracted wide study. However, traditional synchronous federated learning methods are easily slowed by stragglers, and non-independent and identically distributed (non-IID) data sets further reduce convergence speed. In this paper, we propose an asynchronous federated learning method, STAFL, in which users can upload their updates at any time and the server immediately aggregates each update and returns the latest global model. In addition, STAFL infers a user's data distribution from the uploaded update and dynamically adjusts the aggregation parameters according to the user's network weights and staleness, minimizing the impact of non-IID data sets on asynchronous updates.
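The abstract describes the core server behavior: each arriving update is aggregated immediately, with an aggregation weight that shrinks as the update becomes more stale. Below is a minimal sketch of such asynchronous, staleness-weighted aggregation; the decay function `staleness_weight`, the hyperparameter `alpha`, and the representation of models as dicts of NumPy-like arrays are illustrative assumptions, not the paper's exact formulation.

```python
import copy

def staleness_weight(staleness, alpha=0.6):
    """Illustrative decay: the more stale an update, the less it contributes.
    The exact decay used by STAFL is not specified in the abstract."""
    return alpha / (1.0 + staleness)

def aggregate_async(global_model, local_model, staleness):
    """Mix one arriving local update into the global model immediately,
    weighted by its staleness, and return the new global model."""
    w = staleness_weight(staleness)
    new_model = copy.deepcopy(global_model)
    for name, g_param in new_model.items():
        new_model[name] = (1.0 - w) * g_param + w * local_model[name]
    return new_model

# Server loop (sketch; receive_update / send_latest are hypothetical transport calls):
# version = 0
# while True:
#     local_model, local_version = receive_update()
#     staleness = version - local_version          # how outdated the update is
#     global_model = aggregate_async(global_model, local_model, staleness)
#     version += 1
#     send_latest(global_model, version)           # reply with the newest model
```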

Highlights

  • Mobile phones, wearable devices, and autonomous vehicles are just a few of the modern distributed networks that are generating a wealth of data each day

  • We believe that our method outperforms existing federated learning methods based on asynchronous communication

  • The server can use the information stored in a model list to determine whether enough information about a certain data distribution has been aggregated in the global model so that it can better impose corresponding penalties or rewards on arriving updates

Summary

Introduction

Mobile phones, wearable devices, and autonomous vehicles are just a few of the modern distributed networks that are generating a wealth of data each day. Privacy concerns over transmitting raw data require user-generated data to remain on local devices, which has led to growing interest in federated learning, where statistical models are trained directly on remote devices. Most existing asynchronous algorithms are in fact semi-asynchronous strategies: although they allow users to upload updates independently, they still need to synchronize their information. Another challenge of federated learning is the heterogeneity of data. In our method, the server immediately aggregates each arriving update and delivers the latest global model back to the user, thereby reducing the waste of computing resources. We use the weight divergence of the local model to group users and maintain a list of users' update information on the server side.
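The introduction mentions grouping users by the weight divergence of their local models and keeping a server-side list of recent update information, which the server uses to decide whether a data distribution is already well represented and to penalize or reward arriving updates. The sketch below shows one plausible realization under stated assumptions: weight divergence as the relative L2 distance between local and global parameters, and a bounded list of recent updates. The names (`ModelList`, `weight_divergence`), the fixed list length, and the similarity tolerance are illustrative, not the paper's definitions.

```python
from collections import deque
import numpy as np

def weight_divergence(local_model, global_model):
    """Relative L2 distance between local and global weights
    (a common definition of weight divergence, assumed here).
    Models are dicts mapping parameter names to NumPy arrays."""
    num, den = 0.0, 0.0
    for name, g_param in global_model.items():
        num += np.sum((local_model[name] - g_param) ** 2)
        den += np.sum(g_param ** 2)
    return float(np.sqrt(num) / (np.sqrt(den) + 1e-12))

class ModelList:
    """Bounded record of recent updates kept on the server.

    Each entry stores (user_id, divergence, staleness) so the server can
    judge whether a similar data distribution has already been aggregated
    into the global model and adjust rewards or penalties for new updates."""

    def __init__(self, max_len=10):
        self.entries = deque(maxlen=max_len)

    def add(self, user_id, divergence, staleness):
        self.entries.append((user_id, divergence, staleness))

    def similar_count(self, divergence, tol=0.05):
        """Count recent updates with a divergence close to this one,
        i.e. updates likely coming from a similar data distribution."""
        return sum(1 for _, d, _ in self.entries if abs(d - divergence) <= tol)
```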

Related Work
System Design
Staleness Tolerant Model
Weight Divergence Discussion
Aggregation Parameter Settings
Model List Update and Weight Divergence Computation
Reduce Communication Overhead
Experiment Evaluations
Data Settings
Experimental Results of Model List and Weight Divergence
Model List Length Discussion
Comparison of STAFL with Other Methods
Communication Cost Comparison
Conclusions