Abstract

Mobile edge computing (MEC) has the potential to enable computation-intensive applications in 5G networks. MEC extends the computational capacity at the edge of wireless networks by migrating computation-intensive tasks from user devices to MEC servers. In this paper, we consider a multi-user MEC system in which multiple user equipments (UEs) can offload computation to an MEC server over wireless channels. We formulate the sum cost of delay and energy consumption over all UEs as the optimization objective, and we jointly optimize the offloading decisions and the computational resource allocation to minimize this sum cost. However, obtaining an optimal policy in such a dynamic system is challenging. Reinforcement Learning (RL) accounts not only for the immediate reward but also for the long-term return, which is essential in time-variant dynamic systems such as the multi-user wireless MEC system considered here. To this end, we propose an RL-based optimization framework for resource allocation in wireless MEC, with two concrete schemes: one based on Q-learning and one based on Deep Reinforcement Learning (DRL). Simulation results show that the proposed schemes achieve a significant reduction in sum cost compared with baseline approaches.
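For illustration, below is a minimal sketch of the kind of Q-learning-based offloading scheme the abstract describes: a tabular agent that observes discretized channel states and picks a joint binary offload/local decision for all UEs so as to minimize a weighted delay-plus-energy cost. Everything concrete here (the state encoding, the cost model in `step_cost`, the i.i.d. channel dynamics, and all constants) is an assumption made for the sketch, not the paper's actual formulation.

```python
# Hypothetical sketch of tabular Q-learning for binary offloading decisions
# in a multi-user MEC system. All dynamics and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_UES = 3                  # number of user equipments (assumed)
N_CHANNEL_STATES = 4       # discretized channel-quality levels (assumed)
ACTIONS = 2 ** N_UES       # joint binary decision: bit i = 1 means UE i offloads

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table over (joint channel state, joint offloading action)
q = np.zeros((N_CHANNEL_STATES ** N_UES, ACTIONS))

def encode_state(channels):
    """Map per-UE channel levels to a single Q-table index."""
    idx = 0
    for c in channels:
        idx = idx * N_CHANNEL_STATES + c
    return idx

def step_cost(channels, action):
    """Assumed weighted delay+energy cost: offloading is cheap on good
    channels, while local execution has a fixed cost per UE."""
    cost = 0.0
    for i, c in enumerate(channels):
        if (action >> i) & 1:                 # UE i offloads
            cost += 1.0 / (c + 1) + 0.2       # transmission delay + tx energy
        else:                                 # UE i computes locally
            cost += 0.8                       # local delay + CPU energy
    return cost

channels = rng.integers(0, N_CHANNEL_STATES, N_UES)
for episode in range(20000):
    s = encode_state(channels)
    # epsilon-greedy over costs: explore randomly, otherwise pick min-cost action
    a = rng.integers(ACTIONS) if rng.random() < EPS else int(np.argmin(q[s]))
    cost = step_cost(channels, a)
    channels = rng.integers(0, N_CHANNEL_STATES, N_UES)  # i.i.d. fading (assumed)
    s_next = encode_state(channels)
    # Q-learning update; since we minimize cost, bootstrap with min over actions
    q[s, a] += ALPHA * (cost + GAMMA * q[s_next].min() - q[s, a])

print("learned offload decision for the first 5 states:",
      [int(np.argmin(q[s])) for s in range(5)])
```

Because the objective is a cost rather than a reward, the update bootstraps with a min over next-state actions; equivalently, one could negate the cost and use the standard max form. A DRL variant would replace the table `q` with a neural network approximator once the joint state space grows with the number of UEs.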
