Abstract
Future network services are emerging with an inevitable need for high wireless capacity, strong computational capability, stringent latency, and reduced energy consumption. Two technologies show promise in supporting these requirements: multi-access (or mobile) edge computing (MEC) and non-orthogonal multiple access (NOMA). While MEC allows users to access abundant computing resources at the edge of the network, NOMA enables an increase in cellular network density. However, integrating NOMA into MEC systems raises challenges in joint offloading decisions (remote or local computation) and inter-user interference management. In this paper, with the objective of maximizing the system-wide sum computation rate under latency and energy consumption constraints, we propose a two-stage deep reinforcement learning algorithm for the joint problem in a multicarrier NOMA-based MEC system: the first-stage agent makes offloading decisions, while the second-stage agent, given those decisions, determines the resource block assignments for users. Simulation results show that, compared with benchmark algorithms, the proposed algorithm improves the sum computation rate while meeting the latency and energy consumption requirements, and that, owing to faster convergence, it outperforms an approach in which a single agent handles both offloading decisions and resource block assignments.
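The two-stage structure described above can be sketched minimally as a pipeline in which the second stage conditions on the first stage's output. This is only an illustrative sketch: the toy threshold and greedy rules below stand in for the paper's learned agents, and all names (`num_users`, `num_rbs`, the placeholder policies) are assumptions, not the paper's actual networks.

```python
import numpy as np

# Hedged sketch of the two-stage decision pipeline: stage 1 decides
# offloading per user; stage 2, given those decisions, assigns resource
# blocks (RBs). Toy rules replace the paper's trained DRL agents.

rng = np.random.default_rng(0)

def stage1_offload_policy(channel_gains, threshold=0.5):
    """First-stage agent (stand-in): per-user offloading decision.
    1 = offload to the MEC server, 0 = compute locally."""
    return (channel_gains > threshold).astype(int)

def stage2_rb_assignment(offload_decisions, channel_gains, num_rbs):
    """Second-stage agent (stand-in): assign RBs only to offloading
    users, strongest channel first, round-robin over the RBs.
    Under NOMA, several users may share the same RB."""
    assignment = {}
    offloaders = [u for u, d in enumerate(offload_decisions) if d == 1]
    offloaders.sort(key=lambda u: -channel_gains[u])
    for i, u in enumerate(offloaders):
        assignment[u] = i % num_rbs
    return assignment

num_users, num_rbs = 6, 2
gains = rng.random(num_users)
decisions = stage1_offload_policy(gains)
rbs = stage2_rb_assignment(decisions, gains, num_rbs)
print(decisions, rbs)
```

The key design point mirrored here is the information flow: the second stage never revisits the offloading choice, it only allocates radio resources among the users the first stage chose to offload, which is what allows each agent to face a smaller action space.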