Abstract

By deploying computing units in edge servers, computation-intensive tasks generated by devices can be offloaded from the cloud, reducing traffic on the core network and shortening task completion latency. To mitigate the burden on the edge server and improve the user experience, this paper proposes a deep reinforcement learning (DRL)-based multiuser, multitask hybrid computation offloading model that offloads a set of computation-intensive tasks generated by multiple users to the edge server and to adjacent devices. The proposed model makes global offloading decisions for multiple computation-intensive tasks simultaneously rather than one by one, thereby accounting for the impact of each user's offloading decision on overall system performance in multitask offloading scenarios. The main goal of this study is to reduce the long-term overall system delay. The model uses a recurrent neural network to extract feature information from the task and network states, which improves the convergence speed and stability of the DRL model. Experimental results demonstrate that the global decision-making model outperforms other methods in terms of long-term overall system delay and device energy consumption.
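To make the architecture described above concrete, the following is a minimal sketch, not the authors' implementation, of a policy network that encodes per-task and network-state features with a recurrent layer and emits offloading decisions for all tasks jointly. All names, layer sizes, feature dimensions, and the three-way action space (local device, edge server, adjacent device) are illustrative assumptions; the paper does not specify these details in the abstract.

```python
# Hypothetical sketch: a GRU encoder over the sequence of task/network-state
# feature vectors, so each task's decision conditions on the other tasks,
# followed by a per-task head producing logits over offloading targets.
# Dimensions and the action set {local, edge, adjacent device} are assumed.
import torch
import torch.nn as nn

class GlobalOffloadingPolicy(nn.Module):
    def __init__(self, feat_dim=8, hidden_dim=64, num_actions=3):
        super().__init__()
        # Recurrent encoder: reads task/network-state features in sequence.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Per-task logits over {local, edge server, adjacent device}.
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, task_feats):
        # task_feats: (batch, num_tasks, feat_dim)
        encoded, _ = self.encoder(task_feats)
        return self.head(encoded)  # (batch, num_tasks, num_actions)

# Usage: sample a joint offloading decision for 5 tasks at once.
policy = GlobalOffloadingPolicy()
states = torch.randn(1, 5, 8)  # hypothetical feature vectors
logits = policy(states)
actions = torch.distributions.Categorical(logits=logits).sample()
print(actions)  # one offloading target per task, decided jointly
```

The key point the sketch illustrates is the global decision structure: a single forward pass yields actions for every task, unlike one-by-one schemes that decide each task in isolation and ignore cross-task effects on system delay.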
