Abstract

In edge-enabled Internet of Things (IoT), computation offloading services are expected to offer users a better Quality of Experience (QoE) than traditional IoT. Unfortunately, the number of tasks generated by users is growing rapidly as the IoT environment expands. Meanwhile, current QoE-aware computation offloading is typically solved with deep reinforcement learning (DRL), which suffers from instability and slow convergence. Improving the QoE in edge-enabled IoT therefore remains a major challenge. In this article, to enhance the QoE, we propose a new QoE model for studying computation offloading. Specifically, the proposed QoE model captures three influential elements: 1) service latency, determined by local computing latency and transmission latency; 2) energy consumption, determined by local computation and transmission consumption; and 3) task success rate, based on the coding error probability. Moreover, we improve the deep deterministic policy gradient (DDPG) algorithm and propose an algorithm named double-dueling-deterministic policy gradients (D3PG), built on the proposed model. Specifically, in DDPG the actor network relies heavily on the critic network, which makes performance sensitive to the critic and thus leads to poor stability and slow convergence during computation offloading. To solve this, we redesign the critic network using Double $Q$-learning and Dueling networks. Extensive experiments verify that our proposed algorithm achieves better stability and faster convergence than existing methods. In addition, experiments indicate that our proposed algorithm improves QoE performance.
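
To make the structure of the QoE model concrete, the following is a minimal sketch of how the three elements above could be combined into a single score. The weighted linear form and the weights `w_lat`, `w_en`, and `w_suc` are illustrative assumptions for demonstration, not the formulation from the paper.

```python
# Illustrative sketch only: the paper defines its own QoE model; the
# linear weighting below is an assumption, not the paper's formula.
def qoe_score(local_latency, tx_latency, comp_energy, tx_energy,
              success_rate, w_lat=0.4, w_en=0.3, w_suc=0.3):
    """Combine the three QoE elements into one scalar (higher is better)."""
    service_latency = local_latency + tx_latency   # element 1: service latency
    energy = comp_energy + tx_energy               # element 2: energy consumption
    # Element 3, the task success rate, would be derived from the coding
    # error probability in the paper; here it is simply taken as an input.
    return -w_lat * service_latency - w_en * energy + w_suc * success_rate
```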
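
As a sketch of the critic redesign, the snippet below shows one plausible PyTorch realization: a dueling critic that decomposes $Q(s, a)$ into a state value $V(s)$ plus an advantage $A(s, a)$, paired with Double $Q$-learning via two target critics whose minimum forms the temporal-difference target. The layer sizes, the simple $V + A$ combination, and the min-based target are assumptions; the paper's exact D3PG architecture may differ.

```python
import torch
import torch.nn as nn

class DuelingCritic(nn.Module):
    """Critic decomposing Q(s, a) = V(s) + A(s, a), as in Dueling networks."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)            # V(s): state-only stream
        self.adv = nn.Sequential(                    # A(s, a): state-action stream
            nn.Linear(hidden + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        h = self.state_enc(state)
        return self.value(h) + self.adv(torch.cat([h, action], dim=-1))

def td_target(reward, next_q1, next_q2, done, gamma=0.99):
    """Double-Q-style target: take the smaller of two target-critic
    estimates to damp the overestimation that destabilizes DDPG."""
    return reward + gamma * (1.0 - done) * torch.min(next_q1, next_q2)
```

In this sketch, two independent `DuelingCritic` instances (plus their target copies) would be trained against `td_target`, while the actor is updated from only one critic, mirroring how clipped Double $Q$-learning is commonly grafted onto DDPG-style methods.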
