Abstract

The computing power of Internet of Things (IoT) devices varies widely. If devices with low computing power are to participate in machine learning, every node must train the same small model, which wastes the computing power of high-performance devices. This article proposes a heterogeneous model fusion federated learning (HFL) mechanism in which each node trains a model whose scale matches its own computing capability. After receiving the gradient trained by each node, the parameter server (PS) corrects it with a repeat matrix and then updates the corresponding region of the global model according to a mapping matrix. Once all update operations are complete, the PS sends the compressed model back to the corresponding node. The proposed method is evaluated with a variety of experimental schemes, covering three data sets, two model structures, and three computational-complexity levels. The results show that it not only maximizes the use of the unbalanced computing power of edge nodes, but also lets models of different structures compensate for one another's shortcomings, improving overall performance.
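The PS-side update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the index arrays standing in for the mapping matrices, the element-wise counts standing in for the repeat matrix, and all variable names are assumptions made for the example.

```python
import numpy as np

# Flattened global model held by the parameter server (PS).
global_model = np.zeros(8)

# Each node trains a smaller model; a "mapping matrix" relates its
# parameters to positions in the global model (index arrays here).
mappings = {
    "node_a": np.array([0, 1, 2, 3]),        # small model, 4 params
    "node_b": np.array([0, 1, 2, 3, 4, 5]),  # larger model, 6 params
}

# Gradients reported by each node for its own sub-model.
grads = {
    "node_a": np.array([0.4, 0.4, 0.4, 0.4]),
    "node_b": np.array([0.2, 0.2, 0.2, 0.2, 0.6, 0.6]),
}

# "Repeat matrix": how many nodes cover each global position, so
# overlapping regions are averaged rather than double-counted.
repeat = np.zeros_like(global_model)
for idx in mappings.values():
    repeat[idx] += 1.0

# Accumulate gradients into the regions given by each mapping,
# then correct by the repeat counts.
agg = np.zeros_like(global_model)
for node, idx in mappings.items():
    agg[idx] += grads[node]
agg[repeat > 0] /= repeat[repeat > 0]

lr = 1.0
global_model -= lr * agg

# "Compress" the global model back to each node's scale by selecting
# the parameters its mapping covers, before sending it down.
node_a_model = global_model[mappings["node_a"]]
```

Under these assumptions, positions covered by both nodes receive the average of their gradients, positions covered by one node receive that node's gradient unchanged, and each node gets back only the slice of the global model it can hold.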
