Abstract

Mobile edge computing (MEC) is a promising solution that enables resource-limited mobile devices to offload computation-intensive tasks to nearby edge servers. In this paper, dynamic computation offloading in a non-orthogonal multiple access (NOMA) based multi-user network is investigated under stochastic task arrivals and wireless channels. Specifically, a cooperative multi-agent deep reinforcement learning (CMADRL) based framework is proposed to learn decentralized offloading policies that allocate task offloading and local execution powers at each user, minimizing the long-term average network computation cost in terms of power consumption and buffering delay. By leveraging a centralized training and distributed execution strategy, the proposed framework not only learns efficient decentralized policies, but also relieves users' computational burden and effectively coordinates the interference in NOMA-based networks. To reduce training complexity, the framework is further improved to train a single parameter-shared policy network that can be exploited by all users, with only a slight sacrifice in performance. Numerical simulations demonstrate that the proposed CMADRL-based framework learns efficient dynamic offloading policies at each user and significantly outperforms the conventional independent Q-learning based framework as well as several greedy strategies, achieving lower network computation cost.
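The abstract does not specify the authors' network architecture or hyperparameters, so the following is only a minimal sketch of what a centralized-training, distributed-execution setup with a parameter-shared actor might look like in PyTorch. All names (SharedActor, CentralCritic), layer sizes, observation contents, and the discrete power-level action space are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SharedActor(nn.Module):
    """Parameter-shared policy network (assumed form): every user runs the
    same weights on its own local observation (e.g., queue backlog and
    channel gain), enabling fully decentralized execution."""
    def __init__(self, obs_dim: int, n_power_levels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_power_levels),  # logits over discrete power levels
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized critic used only during training: it sees the joint
    observations/actions of all users, which is how the framework can
    account for NOMA co-channel interference among users."""
    def __init__(self, joint_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),  # scalar value of the joint state
        )

    def forward(self, joint_input: torch.Tensor) -> torch.Tensor:
        return self.net(joint_input)

# Decentralized execution: each user samples its offloading / local-execution
# power action from the shared actor using only its own observation.
n_users, obs_dim, n_levels = 4, 6, 8          # assumed sizes
actor = SharedActor(obs_dim, n_levels)
local_obs = torch.randn(n_users, obs_dim)     # one row per user
actions = actor(local_obs).sample()           # independent per-user actions
```

Under this assumed design, the centralized critic is discarded after training, so each device only evaluates the small shared actor at run time; this is consistent with the abstract's claims that the framework relieves users' computational burden and that parameter sharing reduces training complexity at a slight cost in performance.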
