Abstract

This work studies cooperative inference of deep neural networks (DNNs), in which an edge device and an edge server jointly perform the inference process. In particular, unlike previous works that assumed ideal, error-free communication between the two, this work considers a practical noisy wireless channel between the edge device and the edge server. The communication errors caused by the noisy channel must be appropriately mitigated; otherwise, the DNN's predictions become inaccurate. The proposed cooperative DNN inference therefore adopts hybrid automatic repeat request with chase combining (HARQ-CC) together with a practical error correction code (ECC). By analyzing the end-to-end latency of the proposed cooperative DNN inference, we jointly determine the optimal code rate of the ECC and the optimal point at which to split the DNN between the edge device and the edge server so as to minimize the end-to-end latency. Experimental results show that the proposed cooperative DNN inference considerably outperforms comparable schemes from previous works.
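The joint optimization described above can be sketched as an exhaustive search over candidate split points and ECC code rates, each scored by an end-to-end latency model. The sketch below is illustrative only: the per-layer latencies, output sizes, channel rate, and block-error model are assumptions, not values from the paper, and the geometric retransmission count is a crude stand-in for a true HARQ-CC analysis (chase combining would reduce the error probability on each retransmission).

```python
# Hypothetical sketch of jointly choosing a DNN split point and an ECC code
# rate to minimize expected end-to-end latency under HARQ retransmissions.
# All numbers (layer costs, output sizes, error model) are illustrative.

# Per-layer compute latency (s) on the edge device and the edge server,
# and the size (bits) of each layer's output (sent over the channel if split there).
device_latency = [0.02, 0.03, 0.05, 0.08, 0.10]
server_latency = [0.002, 0.003, 0.005, 0.008, 0.010]
output_bits = [4e6, 2e6, 1e6, 5e5, 1e5]

INPUT_BITS = 6e6      # raw input size if everything runs on the server (assumed)
CHANNEL_RATE = 1e6    # channel bit rate in bits/s (assumed)


def block_error_rate(code_rate):
    """Toy monotone error model: a higher code rate means less redundancy,
    hence a higher probability of block error."""
    return min(0.9, 0.05 + 0.6 * code_rate ** 2)


def expected_transmissions(p):
    """Expected number of (re)transmissions if each attempt fails i.i.d. with
    probability p (a simplification of HARQ-CC)."""
    return 1.0 / (1.0 - p)


def end_to_end_latency(split, code_rate):
    """Latency = device compute + coded transmission (with retransmissions)
    + server compute, for a split after `split` layers (0 = all on server)."""
    t_dev = sum(device_latency[:split])
    t_srv = sum(server_latency[split:])
    bits = output_bits[split - 1] if split > 0 else INPUT_BITS
    t_tx_once = bits / (code_rate * CHANNEL_RATE)  # lower rate -> more coded bits
    return t_dev + t_srv + t_tx_once * expected_transmissions(block_error_rate(code_rate))


# Exhaustive joint search over split points and a few candidate code rates.
best_split, best_rate = min(
    ((s, r) for s in range(len(device_latency) + 1) for r in (0.25, 0.5, 0.75, 0.9)),
    key=lambda sr: end_to_end_latency(*sr),
)
print("best (split, code_rate):", (best_split, best_rate))
```

The search space here is small enough that brute force suffices; the paper's contribution lies in the latency analysis that such a search would score against.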
