Abstract

Distributed ML at the network edge is a promising paradigm that can preserve both network bandwidth and the privacy of data providers. However, heterogeneity and the limited computation and communication resources of edge servers (or edges) pose great challenges to distributed ML and give rise to a new paradigm, edge learning (EL), i.e., edge-cloud collaborative machine learning. In this article, we propose a novel "learning to learn" framework for effective EL on heterogeneous edges with resource constraints. We first model the dynamic determination of the collaboration strategy (i.e., the allocation of local iterations at edge servers and global aggregations on the cloud during the collaborative process) as an online optimization problem that balances the performance of EL against the resource consumption of edge servers. We then propose OL4EL, a framework based on the budget-limited multi-armed bandit model. OL4EL supports both synchronous and asynchronous patterns and can be used for both supervised and unsupervised tasks. To evaluate OL4EL, we conducted both real-world testbed experiments and extensive simulations based on Docker containers, with support vector machines and K-means as use cases. Experimental results demonstrate that OL4EL significantly outperforms state-of-the-art EL and other collaborative ML approaches in terms of the trade-off between performance and resource consumption.
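To make the budget-limited multi-armed bandit formulation concrete, the sketch below shows a generic UCB-style bandit that chooses, round by round, how many local iterations an edge server runs before one global aggregation, stopping when a resource budget is exhausted. This is only an illustration of the general technique, not the authors' OL4EL algorithm: the candidate arm values, the per-round costs, the budget, and the observe_reward stub are all hypothetical placeholders.

```python
import math
import random

# Hypothetical arms: each arm is a collaboration decision, here the number
# of local iterations an edge server runs before one global aggregation.
ARMS = [1, 2, 4, 8]
COSTS = {1: 1.2, 2: 1.5, 4: 2.0, 8: 3.0}   # assumed resource cost per round
BUDGET = 50.0                               # assumed total resource budget

counts = {a: 0 for a in ARMS}
rewards = {a: 0.0 for a in ARMS}

def observe_reward(arm):
    """Placeholder for the observed learning gain of a round
    (e.g., loss reduction); simulated here with noise."""
    return max(0.0, 0.5 + 0.1 * math.log(arm) + random.gauss(0, 0.05))

spent, t = 0.0, 0
while spent + min(COSTS.values()) <= BUDGET:
    t += 1
    untried = [a for a in ARMS if counts[a] == 0]
    if untried:
        # Explore every arm at least once before using confidence bounds.
        arm = untried[0]
    else:
        # UCB-style score normalized by cost: favor arms with high reward
        # per unit of resource, plus an exploration bonus.
        arm = max(
            ARMS,
            key=lambda a: (rewards[a] / counts[a]
                           + math.sqrt(2 * math.log(t) / counts[a])) / COSTS[a],
        )
    if spent + COSTS[arm] > BUDGET:
        break
    counts[arm] += 1
    rewards[arm] += observe_reward(arm)
    spent += COSTS[arm]

best = max(ARMS, key=lambda a: rewards[a] / max(counts[a], 1))
print(f"rounds={t}, budget spent={spent:.1f}, best local-iteration count ~ {best}")
```

In a real EL deployment, the reward would come from measured training progress after each aggregation, and the cost model would reflect the actual computation and communication expenditure of each edge server; the budget-limited bandit structure, however, stays the same.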
