Abstract

Currently, huge amounts of data are produced by edge devices. Given the heavy burden on network bandwidth and the service-delay requirements of delay-sensitive applications, processing data at the network edge is an attractive choice. However, edge devices such as smart wearables and connected and autonomous vehicles are usually limited in computational capacity and energy, which affects quality of service. Offloading is widely used as an effective and efficient strategy to address this issue. However, as device heterogeneity and task complexity increase, unreasonable task distribution often leads to degraded service quality and lower resource utilization. Since conventional single-strategy offloading shows limited performance in complex environments, we are motivated to design a dynamic regional resource scheduling framework that works effectively while taking multiple metrics into consideration. Thus, in this article we first propose a double offloading framework that simulates the offloading process in a realistic edge scenario consisting of heterogeneous edge servers and devices. We then formulate offloading as a Markov Decision Process (MDP) and employ a deep reinforcement learning (DRL) algorithm, asynchronous advantage actor-critic (A3C), as the offloading decision-making strategy to balance the workload of edge servers and ultimately reduce overhead in terms of energy and time. Comparison experiments against local computing and the widely used DRL algorithm DQN are conducted on a comprehensive benchmark, and the results show that our approach performs substantially better in self-adjustment and overhead reduction.
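The decision component described above (an actor-critic policy that selects an offloading target from a state describing device and server conditions) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, state layout, and dimensions are assumptions for exposition.

```python
# Illustrative actor-critic sketch for offloading decisions (assumed names/shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffloadActorCritic(nn.Module):
    def __init__(self, state_dim: int, num_targets: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_targets)  # actor: logits over offloading targets
        self.value_head = nn.Linear(hidden, 1)              # critic: estimated state value

    def forward(self, state):
        h = self.shared(state)
        return F.softmax(self.policy_head(h), dim=-1), self.value_head(h)

# Hypothetical state: [device CPU load, battery level, task size, per-server queue lengths, ...]
model = OffloadActorCritic(state_dim=8, num_targets=4)
probs, value = model(torch.rand(1, 8))
action = torch.distributions.Categorical(probs).sample()  # chosen offloading target
```

In an A3C-style setup, several such worker copies would interact with the simulated edge environment asynchronously and push gradients to a shared global network; the reward would reflect the energy and time overhead the abstract aims to reduce.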
