Abstract
With the explosive growth of massive data generated by smart Internet of Things (IoT) devices, federated learning has been envisioned as a promising technique for providing distributed machine learning services while protecting training data privacy. However, conventional federated learning protocols show significant drawbacks in terms of efficiency and scalability. First, because federated learning relies on a synchronous communication model while the computation capability of each device differs, straggling clients can severely degrade efficiency. Second, synchronous communication lacks an effective client selection mechanism to improve model performance in the early stage of training. Third, how to coordinate the communication of the various nodes to accelerate global convergence also needs to be addressed. To solve these problems, we propose a semi-asynchronous federated learning mechanism in which a data expansion method effectively reduces the stragglers present in both synchronous and asynchronous communication models. Moreover, we design a priority function that makes the accuracy increase rapidly in the early stage. Experimental results demonstrate that our proposed method achieves higher accuracy and converges faster than existing synchronous methods.
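To make the abstract's mechanism concrete, below is a minimal sketch of a semi-asynchronous federated averaging loop with priority-based client selection and staleness-weighted aggregation. It is not the authors' implementation: the specific priority function, the staleness weighting, and all names and parameters (e.g., `priority`, `staleness_weight`, `CLIENTS_PER_ROUND`) are illustrative assumptions, since the abstract does not give the paper's formulas.

```python
# Illustrative sketch only: a toy semi-asynchronous FL server loop with a
# hypothetical priority function and staleness weighting. The paper's actual
# data expansion method and priority function are not reproduced here.
import numpy as np

NUM_CLIENTS = 10
MODEL_DIM = 5            # toy model: a flat parameter vector
ROUNDS = 3
CLIENTS_PER_ROUND = 4

def local_train(global_weights, rng):
    """Simulate one client's local update (placeholder for real local SGD)."""
    return global_weights + rng.normal(scale=0.01, size=MODEL_DIM)

def priority(client_id, last_participation, current_round):
    """Hypothetical priority: prefer clients that have waited longest,
    so the global model sees diverse data early in training."""
    return current_round - last_participation[client_id]

def staleness_weight(staleness, decay=0.5):
    """Down-weight updates computed against an older global model."""
    return 1.0 / (1.0 + decay * staleness)

rng = np.random.default_rng(0)
global_weights = np.zeros(MODEL_DIM)
last_participation = {c: -1 for c in range(NUM_CLIENTS)}
client_model_version = {c: 0 for c in range(NUM_CLIENTS)}

for rnd in range(ROUNDS):
    # Select the highest-priority clients instead of sampling uniformly.
    selected = sorted(range(NUM_CLIENTS),
                      key=lambda c: priority(c, last_participation, rnd),
                      reverse=True)[:CLIENTS_PER_ROUND]

    updates, agg_weights = [], []
    for c in selected:
        # Semi-asynchronous: a client's update may be based on an older
        # global model; its contribution is down-weighted by staleness,
        # i.e., the number of rounds since it last synchronized.
        staleness = rnd - client_model_version[c]
        updates.append(local_train(global_weights, rng))
        agg_weights.append(staleness_weight(staleness))
        last_participation[c] = rnd
        client_model_version[c] = rnd

    # Weighted aggregation of whatever updates arrived before the deadline.
    agg_weights = np.array(agg_weights) / np.sum(agg_weights)
    global_weights = np.average(np.stack(updates), axis=0, weights=agg_weights)

print("final global weights:", global_weights)
```

In this sketch, priority-based selection stands in for the early-stage client selection the abstract describes, and staleness weighting stands in for coordinating clients that report at different times; the actual design choices are detailed in the paper itself.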