Abstract

Federated learning enables distributed machine learning without sharing the private and sensitive data of end devices. However, highly concurrent access to the cloud server increases the transmission delay of model updates. Moreover, some local models whose gradients point in the opposite direction of the global model are unnecessary to upload and incur additional communication cost. Existing work mainly focuses on reducing the number of communication rounds or cleaning defective local data, and neither approach addresses the latency caused by high server concurrency. To this end, we study an edge-based communication optimization framework that reduces the number of end devices directly connected to the parameter server while avoiding the upload of unnecessary local updates. Specifically, we cluster devices at the same network location and deploy mobile edge nodes at different network locations to serve as hubs for communication between the cloud and end devices, thereby avoiding the latency associated with high server concurrency. Meanwhile, we propose a cosine-similarity-based method to filter out unnecessary local models, thus avoiding unnecessary communication. Experimental results show that, compared with traditional federated learning, the proposed scheme reduces the number of local updates by 60% and increases the convergence speed of the evaluated model by 10.3%.
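The abstract describes the filtering step only at a high level. As a rough illustration, the sketch below shows one way a device could use cosine similarity between its local update and the latest global update to decide whether uploading is worthwhile. The function name `should_upload`, the flattening of parameters into a single vector, and the zero threshold are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def should_upload(local_update, global_update, threshold=0.0):
    """Return True if the local update is aligned closely enough with the
    global update direction to be worth transmitting.

    A non-positive cosine similarity means the local gradient points away
    from (or is orthogonal to) the global one, so the update is treated as
    unnecessary and kept on the device. Threshold choice is an assumption.
    """
    # Flatten the per-layer parameter deltas into single vectors.
    local = np.concatenate([np.asarray(p).ravel() for p in local_update])
    glob = np.concatenate([np.asarray(p).ravel() for p in global_update])

    denom = np.linalg.norm(local) * np.linalg.norm(glob)
    if denom == 0.0:
        return False  # degenerate (all-zero) update: nothing useful to send

    cosine = float(np.dot(local, glob)) / denom
    return cosine > threshold
```

In such a scheme the check runs on the device (or on its edge node) before transmission, so updates that oppose the global direction never reach the parameter server, which is consistent with the reported reduction in the number of uploaded local updates.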
