Abstract

Cooperative multi-access edge computing (MEC) is a promising paradigm for next-generation mobile networks. However, as the number of users grows, the computational complexity of existing optimization- or learning-based task placement approaches in cooperative MEC can increase significantly, leading to intolerable MEC decision-making delay. In this article, we propose a mean field game (MFG) guided deep reinforcement learning (DRL) approach for task placement in cooperative MEC, which helps servers make timely task placement decisions and significantly reduces the average service delay. Instead of applying MFG or DRL separately, we jointly leverage both for task placement, letting the equilibrium of the MFG guide the learning direction of the DRL agent. We also ensure that the MFG and DRL formulations are consistent, sharing the same optimization goal. Specifically, we define a novel mean field guided $Q$-value (MFG-Q), which estimates the $Q$-value using the Nash equilibrium obtained from the MFG. We evaluate the proposed method's performance using real-world user distributions. Through extensive simulations, we show that the proposed scheme is effective in making timely decisions and reducing the average service delay. Moreover, the convergence rate of our proposed method outperforms that of pure DRL-based approaches.
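To make the MFG-Q idea concrete, the following is a minimal sketch in Python (NumPy) of how an equilibrium estimate from the MFG could guide a DRL learning target. The blending form, the mixing weight `eta`, and the function names are illustrative assumptions for exposition only, not the paper's published formulation.

```python
# Illustrative sketch only; assumes the MFG-Q blends the ordinary
# temporal-difference (TD) target with a target derived from the MFG
# Nash equilibrium, weighted by a hypothetical coefficient `eta`.
import numpy as np

def mfg_q_target(reward, q_next, q_equilibrium, gamma=0.95, eta=0.5):
    """Blend the standard TD target with an MFG-equilibrium estimate.

    reward        : immediate reward of the task placement action
    q_next        : learner's Q-value estimates over next-state actions
    q_equilibrium : Q-values implied by the MFG Nash equilibrium
                    (hypothetical input standing in for the paper's
                    equilibrium computation)
    eta           : assumed weight on the equilibrium guidance
    """
    td_target = reward + gamma * np.max(q_next)          # pure DRL target
    eq_target = reward + gamma * np.max(q_equilibrium)   # MFG-guided target
    return (1.0 - eta) * td_target + eta * eq_target

# Toy usage: a task can be placed on one of 3 candidate edge servers.
rng = np.random.default_rng(0)
q_next = rng.uniform(0.0, 1.0, size=3)   # learned Q-value estimates
q_eq = rng.uniform(0.0, 1.0, size=3)     # equilibrium-based estimates
print(mfg_q_target(reward=1.0, q_next=q_next, q_equilibrium=q_eq))
```

Under this reading, `eta` controls how strongly the equilibrium steers the learner: early in training the equilibrium target can anchor the updates, which is one plausible mechanism behind the faster convergence the abstract reports.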
