Abstract

With the rapid development of computing and communication technology, a large number of delay-sensitive and computation-intensive applications have emerged on the mobile Internet. Traditional cloud computing cannot handle such applications well, and distributed mobile edge computing (MEC) is considered an effective technology to address this problem. However, the computation resources of edge base stations are limited, so computation offloading and resource allocation are the key factors affecting service efficiency and energy consumption. In this paper, the adaptive computation offloading and resource allocation problem is modeled as a Markov decision process (MDP). Deep reinforcement learning (DRL) algorithms require no prior knowledge of the environment and can learn optimal strategies through training, which makes them well suited to such dynamic decision-making problems. The Deep Deterministic Policy Gradient (DDPG) algorithm, a DRL method, is therefore used to solve the model and obtain computation offloading and resource allocation strategies that minimize time delay and energy consumption. Simulation results show that, compared with other algorithms, the proposed algorithm greatly improves the task completion rate and effectively reduces time delay and energy consumption.
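For concreteness, the following is a minimal sketch (in PyTorch) of the DDPG update that such a scheme would run. The state layout (task size, channel gain, free edge CPU), the action layout (offloading ratio, CPU share), the network sizes, and the random stand-in environment are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal DDPG sketch for a continuous offloading/allocation policy.
# Assumed layouts (not from the paper):
#   state  = [task bits, channel gain, free edge CPU]  -> STATE_DIM = 3
#   action = [offloading ratio, CPU share], both in [0, 1] -> ACTION_DIM = 2
# Exploration noise is omitted for brevity.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 2
GAMMA, TAU = 0.99, 0.005       # discount factor and soft-update rate

def mlp(in_dim, out_dim, out_act):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), out_act)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Sigmoid())         # actions in [0, 1]
critic = mlp(STATE_DIM + ACTION_DIM, 1, nn.Identity())   # Q(s, a)
actor_tgt = mlp(STATE_DIM, ACTION_DIM, nn.Sigmoid())
critic_tgt = mlp(STATE_DIM + ACTION_DIM, 1, nn.Identity())
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = deque(maxlen=100_000)  # replay buffer of (s, a, r, s') transitions

def update(batch_size=64):
    s, a, r, s2 = map(torch.stack, zip(*random.sample(buffer, batch_size)))
    # Critic: regress Q(s, a) toward r + gamma * Q_tgt(s', pi_tgt(s')).
    with torch.no_grad():
        target = r + GAMMA * critic_tgt(torch.cat([s2, actor_tgt(s2)], 1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], 1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value of the deterministic policy.
    actor_loss = -critic(torch.cat([s, actor(s)], 1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft-update the target networks toward the online networks.
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, p_tgt in zip(net.parameters(), tgt.parameters()):
            p_tgt.data.mul_(1 - TAU).add_(TAU * p.data)

if __name__ == "__main__":
    # Stand-in for the MEC environment: random transitions whose reward is
    # the negative weighted sum of placeholder delay and energy values.
    for _ in range(256):
        s = torch.rand(STATE_DIM)
        a = actor(s).detach()
        delay, energy = torch.rand(()), torch.rand(())   # placeholders
        r = -(0.5 * delay + 0.5 * energy).unsqueeze(0)
        buffer.append((s, a, r, torch.rand(STATE_DIM)))
    update()
```

In the paper's setting, the random transitions would be replaced by states observed from the MEC system, and the reward would be computed from the actual delay and energy of each offloading decision.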
