Abstract

Internet of Things (IoT) devices are gaining popularity with advanced wireless technologies such as 5G. However, in 5G applications (e.g., on edge platforms), IoT devices have limited computation and processing capabilities, which makes it challenging to execute Deep Neural Network (DNN) models on them. To address this, we introduce Split Computing, which partitions DNN inference layers between the IoT device and a computationally powerful edge device based on resource constraints such as bandwidth, battery level, and processing power. To validate split computing, we propose a Distributed Artificial Intelligence (DAI) architecture. We apply the architecture to a fitness use case, detecting a person's pose with our proposed Quantized Split PoseNet DNN (QSP-DNN) algorithm, which partitions the DNN layers between the IoT device and the edge based on Wi-Fi bandwidth. We perform experiments to validate the QSP-DNN algorithm on the DAI architecture, comparing split execution (partial offload, computed across the IoT device and the edge) against full offload executed entirely on the edge device. The results show that split execution with QSP-DNN in the DAI architecture achieves a 20.76% improvement over the full-offload case.
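
The following is a minimal sketch (not the paper's implementation) of bandwidth-driven split-point selection: given a per-layer profile of intermediate-output sizes and latencies, pick the partition that minimizes estimated end-to-end latency. All layer names, sizes, and timings below are illustrative placeholders, not values reported for QSP-DNN.

```python
# Hypothetical per-layer profile:
# (layer_name, output_size_mb, device_latency_ms, edge_latency_ms)
LAYER_PROFILE = [
    ("conv1",   1.60, 12.0, 1.5),
    ("block1",  0.80, 30.0, 4.0),
    ("block2",  0.40, 45.0, 6.0),
    ("block3",  0.20, 60.0, 8.0),
    ("heatmap", 0.05,  8.0, 1.0),
]

RAW_INPUT_MB = 2.0  # placeholder size of the raw camera frame


def estimate_latency(split_index: int, bandwidth_mbps: float) -> float:
    """End-to-end latency if layers [0, split_index) run on the IoT device
    and the remaining layers run on the edge (split_index == 0 is full offload)."""
    device_ms = sum(l[2] for l in LAYER_PROFILE[:split_index])
    edge_ms = sum(l[3] for l in LAYER_PROFILE[split_index:])
    # Tensor crossing the Wi-Fi link: the raw input for full offload,
    # otherwise the output of the last on-device layer.
    transfer_mb = RAW_INPUT_MB if split_index == 0 else LAYER_PROFILE[split_index - 1][1]
    transfer_ms = transfer_mb * 8.0 / bandwidth_mbps * 1000.0
    return device_ms + transfer_ms + edge_ms


def choose_split(bandwidth_mbps: float) -> int:
    """Pick the split index with the lowest estimated end-to-end latency."""
    candidates = range(len(LAYER_PROFILE) + 1)  # 0 .. fully on-device
    return min(candidates, key=lambda i: estimate_latency(i, bandwidth_mbps))


if __name__ == "__main__":
    for bw in (2.0, 20.0, 100.0):  # measured Wi-Fi bandwidth in Mbps
        idx = choose_split(bw)
        print(f"bandwidth={bw:5.1f} Mbps -> split index {idx}, "
              f"latency ~{estimate_latency(idx, bw):.1f} ms")
```

As the bandwidth drops, the transfer term dominates and the chosen split moves deeper into the network (more layers stay on the IoT device); with a fast link the selection tends toward full offload.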
