Abstract

With the rapid development of the Internet of Things (IoT), the volume of data generated by IoT devices is increasing exponentially. Edge computing alleviates the limited network bandwidth and transmission delay that arise when IoT tasks are processed in traditional cloud computing. Moreover, with the growing popularity of deep learning, more and more terminal devices are equipped with AI (Artificial Intelligence) processors to provide higher processing capability at the edge. However, deep learning task offloading in heterogeneous edge computing environments has not been fully investigated. In this paper, we propose a multi-model edge computing offloading framework that uses NVIDIA Jetson edge devices (Jetson TX2, Jetson Xavier NX, and Jetson Nano) and GeForce RTX GPU servers (RTX 3080 and RTX 2080) to simulate the edge computing environment and makes binary computational offloading decisions for face detection tasks. We also introduce a Bayesian optimization algorithm, MTPE (Modified Tree-structured Parzen Estimator), to reduce the total cost of edge computation within a time slot, comprising response time and energy consumption, while satisfying the accuracy requirements of face detection. In addition, we employ a Lyapunov model to determine the energy harvested between time slots and keep the energy queue stable. Experiments show that the MTPE algorithm reaches the globally optimal solution in fewer iterations. The total cost of the multi-model edge computing framework is reduced by an average of 17.94% compared with a single-model framework. Compared with the Double Deep Q-Network (DDQN), our algorithm reduces the computational consumption of obtaining the offloading decision by 23.01%.
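
For intuition, the sketch below uses an off-the-shelf TPE sampler (Optuna) as a stand-in for the paper's MTPE algorithm to search binary offloading decisions that minimize a weighted sum of response time and device energy. The task count, latency and energy figures, and cost weights are hypothetical placeholders rather than values from the paper, and the full framework additionally enforces the face-detection accuracy requirement and the Lyapunov energy-queue constraint, which are omitted here.

```python
# Illustrative sketch only: a standard TPE sampler (via Optuna) standing in for
# the paper's MTPE algorithm. All numbers below are hypothetical placeholders.
import optuna

N_TASKS = 8            # hypothetical number of face-detection tasks in one time slot
LOCAL_LATENCY = 0.08   # s per task when run on the Jetson device (assumed)
LOCAL_ENERGY = 0.9     # J per task when run on the Jetson device (assumed)
EDGE_LATENCY = 0.15    # s per task when offloaded, including transmission (assumed)
EDGE_ENERGY = 0.3      # J of device-side transmission energy per offloaded task (assumed)
W_TIME, W_ENERGY = 0.6, 0.4   # weights combining response time and energy into one cost


def total_cost(trial: optuna.Trial) -> float:
    """Weighted response-time + energy cost of one binary offloading decision vector."""
    time_cost = 0.0
    energy_cost = 0.0
    for i in range(N_TASKS):
        offload = trial.suggest_int(f"offload_{i}", 0, 1)  # 0 = run locally, 1 = offload
        if offload:
            time_cost += EDGE_LATENCY
            energy_cost += EDGE_ENERGY
        else:
            time_cost += LOCAL_LATENCY
            energy_cost += LOCAL_ENERGY
    return W_TIME * time_cost + W_ENERGY * energy_cost


# TPE builds density models of good and bad trials and proposes decisions
# that maximize the ratio between them, converging in relatively few trials.
study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(total_cost, n_trials=50)
print("best decision:", study.best_params, "cost:", study.best_value)
```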
