Abstract

Mobile edge computing (MEC) is a promising solution for supporting resource-constrained devices by offloading tasks to edge servers. However, traditional approaches to computation offloading (e.g., linear programming and game-theoretic methods) mainly focus on immediate performance, potentially leading to performance degradation in the long run. Recent breakthroughs in deep reinforcement learning (DRL) provide alternative methods that focus on maximizing the cumulative reward. Nonetheless, a large gap remains before DRL can be deployed in real MEC systems, because 1) training a well-performing DRL agent typically requires large quantities of diverse data, and 2) DRL training usually incurs substantial trial-and-error costs. To address this gap, we study the application of DRL to the multi-user computation offloading problem from a more practical perspective. In particular, we propose a distributed and collective DRL algorithm called DC-DRL with several improvements: 1) a distributed and collective training scheme that assimilates knowledge from multiple MEC environments, which not only greatly increases the amount and diversity of training data but also spreads the exploration costs; 2) an update method called adaptive n-step learning, which improves training efficiency without suffering from the high variance caused by distributed training; and 3) a combination of deep neuroevolution and policy gradients that maximizes the utilization of multiple environments and prevents premature convergence. Finally, evaluation results demonstrate the effectiveness of the proposed algorithm: compared with the baselines, the exploration costs and final system costs are reduced by at least 43 and 9.4 percent, respectively.
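For readers less familiar with n-step methods, the sketch below illustrates the standard bootstrapped n-step return on which updates of this kind are built. The function name, signature, and use of a fixed n are illustrative assumptions for this sketch only; the mechanism DC-DRL uses to adapt n is defined in the paper body, not here.

```python
# Minimal sketch (assumed, not the paper's implementation) of a bootstrapped
# n-step return: G = sum_{k=0}^{n-1} gamma^k * r_k + gamma^n * V(s_n).
from typing import List


def n_step_return(rewards: List[float], bootstrap_value: float,
                  gamma: float = 0.99) -> float:
    """Discounted n-step return over rewards r_0..r_{n-1}, bootstrapped with V(s_n)."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r                      # accumulate discounted rewards
    g += (gamma ** len(rewards)) * bootstrap_value  # bootstrap tail with the value estimate
    return g


# Example: a 3-step return with a bootstrapped value estimate of 1.5.
print(n_step_return([0.2, 0.1, 0.4], bootstrap_value=1.5))
```

Larger n reduces bootstrapping bias but increases the variance of the target, which is why an adaptive choice of n is attractive in a distributed setting.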
