Abstract

Distributed base station deployment, limited server resources, and dynamically changing end users make the design of computation offloading schemes in mobile edge networks extremely challenging. Given the strengths of deep reinforcement learning (DRL) in handling dynamic, complex problems, this paper designs an optimal computation offloading and resource allocation strategy. First, the authors consider a multi-user mobile edge network consisting of a Macro-cell Base Station (MBS), a Small-cell Base Station (SBS), and multiple terminal devices, and formulate in detail the resulting communication and computation overheads. Combined with the deterministic delay requirements of tasks, the optimization objective is defined as the overall system energy consumption. A learning algorithm based on Deep Deterministic Policy Gradient (DDPG) is then proposed to minimize this energy consumption. Finally, simulation experiments show that the proposed DDPG algorithm effectively optimizes the objective: the total system energy consumption is only 15.6 J, which is lower than that of the compared algorithms. The results also demonstrate that the proposed algorithm allocates communication resources effectively.
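The communication and computation overheads mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model: the functional forms (a CPU energy model quadratic in frequency, a Shannon-rate uplink) and all parameter names and values below are illustrative assumptions commonly used in the offloading literature.

```python
import math

# Hypothetical overhead model for one task, characterized by its input
# size in bits and its required CPU cycles. All constants are assumptions.

def local_overhead(cycles, cpu_freq, kappa=1e-27):
    """Local execution: delay = C / f, energy = kappa * f^2 * C
    (kappa is an assumed effective switched-capacitance coefficient)."""
    delay = cycles / cpu_freq
    energy = kappa * cpu_freq ** 2 * cycles
    return delay, energy

def offload_overhead(data_bits, bandwidth, snr, tx_power, cycles, edge_freq):
    """Offloaded execution: uplink transmission at the Shannon rate,
    then computation on the edge server's CPU."""
    rate = bandwidth * math.log2(1 + snr)   # achievable uplink rate (bit/s)
    tx_delay = data_bits / rate
    tx_energy = tx_power * tx_delay         # device only pays transmit energy
    edge_delay = cycles / edge_freq
    return tx_delay + edge_delay, tx_energy

# Example: a 1 Mbit task needing 1e9 CPU cycles; the edge CPU is 10x faster.
d_loc, e_loc = local_overhead(cycles=1e9, cpu_freq=1e9)
d_off, e_off = offload_overhead(data_bits=1e6, bandwidth=1e7, snr=100.0,
                                tx_power=0.5, cycles=1e9, edge_freq=1e10)
```

Under these assumed parameters offloading reduces both delay and device energy, which is the trade-off the paper's DDPG agent learns to navigate per task and per channel state.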

