Abstract

Distributing a workload that cannot be handled by a single edge device across multiple edge devices is a promising way to minimize the inference latency of deep learning applications by exploiting model parallelism. Several prior solutions have been proposed to partition target models efficiently, and most of them focus on finding the optimal fused-layer configuration, which minimizes the data-transfer overhead between layers. However, as recent deep learning models have become more complex and the ability to deploy them quickly has become a key challenge, finding the best fused-layer configuration of a target model quickly has become a major requirement. To address this problem, we propose a lightweight model partitioning framework called Legion that finds optimal fused-layer configurations with a minimal number of profiling trials. By combining cost matrix construction with wild card selection, Legion achieves performance close to that of a full configuration search at a fraction of the search time. Moreover, Legion remains effective on groups of heterogeneous target devices by constructing a per-device cost matrix. Across three popular networks and various device configurations consisting of up to six heterogeneous devices, Legion shows only a 3.4% performance loss compared to a full searching scheme (FSS), while reducing the profiling overhead by 48.7× on average.
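The abstract gives only a high-level view of how a profiled cost matrix drives the configuration search; the details are in the paper. As a rough illustration of the idea, the sketch below chooses fused-layer split points with a dynamic program over a profiled cost table. All names (fused_cost, transfer_cost, best), the toy numbers, and the DP formulation are illustrative assumptions, not Legion's actual algorithm:

```python
# Hypothetical sketch: picking fused-layer split points from a profiled
# cost matrix. Toy numbers and names only; not Legion's real algorithm.

from functools import lru_cache

# fused_cost[(i, j)]: assumed profiled latency of running layers i..j
# (inclusive) as one fused block on a device.
# transfer_cost[j]: assumed cost of shipping layer j's activation to the
# next device when a partition boundary falls after layer j.
fused_cost = {
    (0, 0): 4.0, (0, 1): 7.0, (0, 2): 13.0,
    (1, 1): 5.0, (1, 2): 9.0,
    (2, 2): 6.0,
}
transfer_cost = {0: 3.0, 1: 1.0}

NUM_LAYERS = 3

@lru_cache(maxsize=None)
def best(i: int) -> float:
    """Minimal total latency for layers i..NUM_LAYERS-1."""
    if i == NUM_LAYERS:
        return 0.0
    candidates = []
    # Try every fused block starting at layer i; pay a transfer cost
    # whenever the block boundary falls inside the network.
    for j in range(i, NUM_LAYERS):
        cost = fused_cost[(i, j)]
        if j + 1 < NUM_LAYERS:
            cost += transfer_cost[j]
        candidates.append(cost + best(j + 1))
    return min(candidates)

print(f"best total latency: {best(0):.1f}")  # 13.0 for these toy numbers
```

The benefit of such a cost matrix is that once each fused block's latency has been profiled, candidate configurations can be compared without re-running the model; per the abstract, Legion's contribution is building this matrix (per device, in the heterogeneous case) from far fewer profiling trials than an exhaustive search would require.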
