Abstract

In big data analytics, large-scale machine learning applications often resort to distributed processing and parallel computing, where coordinating the collaboration among edge nodes, especially in heterogeneous environments, has become a promising research direction for both algorithm design and system implementation. This chapter presents an efficient and scalable TinyML platform that is compatible with heterogeneous environments and fully exploits the capacity of edge devices when running machine learning applications. A critical question toward this goal is how to build a high-performance architecture for large-scale edge learning systems. This chapter summarizes the existing parallelism mechanisms for TinyML systems. As an emerging distributed training framework, Federated Learning (FL) aims to collaboratively train ML models among multiple participants without sharing their raw data at any point during training. This chapter also walks through a hands-on FL implementation; by following its steps, readers can easily construct an FL training platform, which helps in understanding the concept of FL.
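To make the FL idea concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation scheme, written in NumPy. It is illustrative only and does not reflect the chapter's actual platform: the function names (local_sgd_step, fed_avg), the linear-regression objective, and all hyperparameters are assumptions chosen for brevity. The key property the sketch demonstrates is that clients exchange only model parameters with the server, never their raw data.

```python
# A minimal, illustrative FedAvg sketch (names and model are hypothetical,
# not taken from the chapter's platform).
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models, weighted by data size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Synthetic private datasets for three clients; raw data never leaves a client.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                    # communication rounds
    local_models = []
    for X, y in clients:               # each client trains locally
        w = w_global.copy()
        for _ in range(5):             # a few local SGD steps per round
            w = local_sgd_step(w, X, y)
        local_models.append(w)
    # Only model parameters are uploaded to the server, not the raw data.
    w_global = fed_avg(local_models, [len(y) for _, y in clients])

print("learned:", w_global, "true:", true_w)
```

Running the sketch shows the global model converging toward the true parameters even though no client ever reveals its dataset, which is exactly the collaboration-without-data-sharing property the abstract describes.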
