Abstract

Supporting latency-sensitive and computation-intensive applications is difficult for mobile devices (MDs) with limited battery capacity and computing resources. Therefore, mobile edge computing (MEC) and wireless power transfer (WPT) have emerged as promising technologies that allow MDs to offload part or all of their workloads to MEC servers and to harvest energy to prolong their battery lifetime. However, the MEC server’s limited computing resources, the available communication channel quality, and time-limited energy harvesting (EH) make computation offloading challenging. In this paper, we study the joint problem of decentralized computation offloading and resource allocation (JDCORA) in a non-orthogonal multiple access (NOMA)-assisted MEC environment with multiple EH-enabled MDs. To learn decentralized offloading policies, we propose a multi-agent deep reinforcement learning (MADRL)-based scheme that minimizes energy consumption and task completion time while letting MDs cooperatively adjust their strategies. In particular, we improve multi-agent deep deterministic policy gradient (MADDPG) with double actors, double centralized critics, soft value estimation, critic regularization, and proportional-based prioritized experience replay (pPER), yielding an algorithm called multi-agent twin actors regularized critics (MATARC). Simulation results demonstrate that MATARC converges better than baseline methods and reduces the average energy consumption, task completion time, and task drop rate.
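To make the pPER component concrete, the sketch below shows a minimal proportional prioritized replay buffer following the standard scheme of Schaul et al. (priorities proportional to the absolute TD error, importance-sampling weights for bias correction). The class name, hyperparameter defaults, and structure are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of proportional prioritized experience replay (pPER).
# Names and defaults are hypothetical; MATARC's implementation is not given here.
import numpy as np

class ProportionalReplayBuffer:
    """Replay buffer with proportional prioritization.

    Transitions are sampled with probability P(i) = p_i^alpha / sum_j p_j^alpha,
    where p_i = |TD error_i| + eps, and corrected with importance weights
    w_i = (N * P(i))^(-beta), normalized by the batch maximum.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data = [None] * capacity          # stored transitions
        self.priorities = np.zeros(capacity)   # p_i for each slot
        self.pos, self.size = 0, 0

    def add(self, transition):
        # New transitions receive the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_p = self.priorities[:self.size].max() if self.size > 0 else 1.0
        self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        p = self.priorities[:self.size] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(self.size, batch_size, p=probs)
        weights = (self.size * probs[idx]) ** (-self.beta)
        weights /= weights.max()               # normalize for stability
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Proportional scheme: priority equals the absolute TD error plus a
        # small constant so no transition has zero sampling probability.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a MADDPG-style learner, the returned importance weights would typically scale each sample's critic loss, and the critics' TD errors would be fed back through `update_priorities` after every update.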
