Abstract

Multi-access Edge Computing (MEC) is a promising paradigm for handling computation-intensive and latency-sensitive applications in 5G and beyond. Users can benefit from offloading tasks over wireless channels to MEC servers deployed at the nearby network edge. However, radio resources are scarce, and the computing resources of MEC are limited compared to the remote cloud. When making an offloading decision, it is therefore also important to allocate radio and MEC computing resources efficiently to ensure better service for the offloaded tasks. In this paper, we target long-term delay and energy-consumption performance in a multi-user system and design an online solution based on Deep Reinforcement Learning (DRL) to deal with time-varying user requests and wireless channel conditions. To obtain better convergence properties, we propose a new Actor-Critic model, called Discrete And Continuous Actor-Critic (DAC), which jointly optimizes the continuous actions (i.e., radio resource allocation and computing resource allocation) and the discrete action (i.e., the offloading decision), and trains the model iteratively with a weighted loss function. Our simulation results show that DAC outperforms existing solutions based on DDPG, DQN, and others in terms of convergence speed, delay, and energy performance.
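To make the hybrid discrete/continuous structure concrete, the following is a minimal sketch (in PyTorch) of an actor-critic with one discrete head for the offloading decision and one continuous head for resource allocation, combined through a weighted loss. The layer sizes, the sigmoid parameterization of allocations, and the weights w_d and w_c are illustrative assumptions, not the paper's actual DAC architecture or hyperparameters.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridActorCritic(nn.Module):
    """Hypothetical hybrid actor-critic: a discrete head (offloading decision),
    a continuous head (radio/computing resource allocation), and a shared critic."""
    def __init__(self, state_dim, n_offload_choices, alloc_dim, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.discrete_head = nn.Linear(hidden, n_offload_choices)  # offloading logits
        self.mu_head = nn.Linear(hidden, alloc_dim)                # allocation means
        self.log_std = nn.Parameter(torch.zeros(alloc_dim))        # allocation log-std
        self.value_head = nn.Linear(hidden, 1)                     # state-value estimate

    def forward(self, state):
        h = self.backbone(state)
        disc_dist = Categorical(logits=self.discrete_head(h))
        cont_dist = Normal(torch.sigmoid(self.mu_head(h)), self.log_std.exp())
        return disc_dist, cont_dist, self.value_head(h)

def weighted_ac_loss(model, state, offload_act, alloc_act, ret, w_d=0.5, w_c=0.5):
    """Weighted loss combining the discrete-policy term, the continuous-policy term,
    and a critic (value) term; w_d and w_c are assumed weighting coefficients."""
    disc_dist, cont_dist, value = model(state)
    advantage = (ret - value).detach().squeeze(-1)
    loss_disc = -(disc_dist.log_prob(offload_act) * advantage).mean()
    loss_cont = -(cont_dist.log_prob(alloc_act).sum(-1) * advantage).mean()
    loss_critic = (ret - value).pow(2).mean()
    return w_d * loss_disc + w_c * loss_cont + loss_critic
```

In this sketch, training proceeds by sampling both action types from the joint policy, observing the delay/energy-based return, and descending the weighted loss, so that the offloading decision and the resource allocations are updated together rather than by separate learners.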
