Abstract

The joint problem of task offloading, collaborative computing, and resource allocation in Multi-access Edge Computing (MEC) is a challenging issue. In this paper, we investigate the joint problem of collaborative task offloading and resource allocation, in which computing tasks are split at MEC servers through collaboration among the MEC servers and a cloud server. We formulate a joint collaborative task offloading, computing resource allocation, and subcarrier and power allocation problem in MEC. The goal is to minimize the total energy consumption of the MEC system while satisfying a delay constraint. The formulated problem is a non-convex mixed-integer optimization problem. To solve it, we propose a Deep Reinforcement Learning (DRL) based bi-level optimization framework: the task offloading decision, computing collaboration decision, and power and subcarrier allocation subproblems are solved at the upper level, while the computing resource allocation subproblem is solved at the lower level. We combine Dueling Deep-Q-Network (DQN) and Double DQN, and add adaptive parameter-space Noise (D²N-DQN), to improve DRL performance in MEC. Simulation results demonstrate that the proposed algorithm achieves near-optimal performance in Energy Efficiency (EE) and task completion rate compared with other DRL-based approaches and other benchmark schemes under various network parameter settings.
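To make the D²N-DQN combination concrete, the sketch below illustrates its three ingredients: a dueling network head, the Double-DQN target computation, and parameter-space noise for exploration. It is a minimal illustration assuming a PyTorch implementation; all names, layer sizes, and hyperparameters (state_dim, n_actions, sigma) are our own assumptions, not taken from the paper.

```python
# Minimal sketch of the three D^2N-DQN ingredients named in the abstract.
# All dimensions and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, r, s_next, done, gamma=0.99):
    # Double DQN: the online net selects the greedy action,
    # the target net evaluates it, reducing overestimation bias.
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_star).squeeze(1)
        return r + gamma * (1.0 - done) * q_next

def perturbed_copy(online: nn.Module, sigma: float) -> nn.Module:
    # Parameter-space noise: act with a Gaussian-perturbed copy of the
    # online network; sigma would be adapted elsewhere from a measured
    # distance between the perturbed and unperturbed policies (assumption).
    noisy = copy.deepcopy(online)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return noisy
```

In such a setup, the agent would act each step through a freshly perturbed copy of the online network, while learning updates minimize the error between online Q-values and the Double-DQN targets above.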
