Abstract

As an emerging computing paradigm, edge computing can assist user equipments (UEs) in executing computation-intensive deep neural network (DNN) inference tasks, thereby satisfying stringent QoS requirements and relieving the burden on UEs. Due to the customizability of DNN models and the limited capacity of the edge server, it is more realistic to upload DNN models on demand during end-to-edge co-inference than to deploy all DNN models at the edge server in advance. Existing works adopt a serial model uploading manner, in which subsequent DNN layers are uploaded only after antecedent layers finish execution, which inevitably prolongs the DNN execution latency. To this end, we design a parallel-efficient model uploading mechanism that allows subsequent DNN layers to be uploaded while antecedent layers are executing, thereby mitigating the performance degradation caused by model uploading. On this basis, we propose a Multi-UE Joint Optimization Algorithm based on Model Uploading (MJOA-MU) to optimize DNN partitioning and resource allocation for heterogeneous UEs. Specifically, MJOA-MU includes a Pruned Binary Tree based DNN Partitioning (PBT-DP) sub-algorithm that efficiently makes near-optimal partitioning decisions for chain and non-chain models based on the long-term influence between DNN layers, and an Asynchronous Resource Allocation (ARA) sub-algorithm that allocates computation and communication resources to UEs by quantifying the inner- and inter-associations, so as to match individual demands with the resource budget. Extensive simulation results demonstrate that MJOA-MU outperforms state-of-the-art methods in terms of DNN execution latency, achieving up to a 64.5% reduction.
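As a rough illustration of the gap between serial and parallel model uploading that motivates this work, the following sketch compares end-to-end latency for the edge-side portion of a toy chain DNN. The per-layer upload and execution times are hypothetical, and the schedule is a simplification, not the paper's exact formulation.

```python
# Toy latency model contrasting serial vs. pipelined (parallel) model uploading
# for the edge-side layers of a chain DNN. All numbers are hypothetical and the
# schedule is a simplification of the mechanism described in the abstract.

def serial_latency(upload, execute):
    """Serial manner: layer i+1 is uploaded only after layer i finishes executing,
    so upload and execution never overlap."""
    return sum(u + e for u, e in zip(upload, execute))

def pipelined_latency(upload, execute):
    """Parallel manner: uploads proceed back-to-back over the link while
    antecedent layers execute; a layer starts executing once its weights have
    arrived and its predecessor has finished."""
    upload_done = 0.0   # time the current layer's weights arrive at the edge
    exec_done = 0.0     # time the previous layer finishes executing
    for u, e in zip(upload, execute):
        upload_done += u                          # uploads are serialized on the link
        exec_done = max(exec_done, upload_done) + e
    return exec_done

if __name__ == "__main__":
    upload = [4.0, 3.0, 5.0, 2.0]    # per-layer upload times (ms), hypothetical
    execute = [2.0, 2.5, 3.0, 1.5]   # per-layer edge execution times (ms), hypothetical
    print("serial   :", serial_latency(upload, execute))     # 23.0
    print("pipelined:", pipelined_latency(upload, execute))  # 16.5
```

In this toy setting, overlapping uploads with execution hides most of the uploading delay, which is the effect the proposed mechanism exploits.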
