Abstract

In this letter, we consider the latency minimization problem in NOMA-MEC networks. Each user offloads part of its task to the MEC server for remote execution and processes the remainder locally. An iterative two-user NOMA scheme is proposed for task offloading. The users' task partition ratios and offloading powers have a significant effect on system performance, and we optimize them with the deep deterministic policy gradient (DDPG) method. Specifically, we derive an upper bound on the users' offloading power, so that all power variables can be normalized to lie between zero and one. The ratio and power variables then share the same range and can be produced by a single DDPG actor network. Moreover, both the optimization objective and the constraints are incorporated into the reward function, which guides DDPG toward the desired strategies. Simulation results show that the proposed algorithm effectively reduces the task processing latency.
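The normalization and reward design described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `p_max`, `decode_action`, `reward`, the equal split of the action vector, and the linear constraint penalty are all assumptions introduced here for clarity.

```python
import numpy as np

def decode_action(action, p_max):
    """Split a unit-range DDPG actor output into partition ratios and powers.

    Because the offloading powers are normalized by an upper bound p_max
    (an assumed scalar here), ratios and powers share the range [0, 1]
    and can come from a single actor network.
    """
    action = np.clip(np.asarray(action, dtype=float), 0.0, 1.0)
    n = action.size // 2
    ratios = action[:n]            # fraction of each user's task offloaded
    powers = action[n:] * p_max    # rescale normalized powers to watts
    return ratios, powers

def reward(latency, powers, p_max, penalty=10.0):
    """Illustrative reward: negative latency, penalized on constraint violation."""
    violation = np.maximum(powers - p_max, 0.0).sum()
    return -latency - penalty * violation
```

In this sketch, minimizing latency corresponds to maximizing the reward, and any power exceeding the derived upper bound is discouraged through the penalty term, so the agent learns to respect the constraint while reducing delay.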
