Abstract

By extending computation capacity to the edge of wireless networks, edge computing has the potential to enable computation-intensive and delay-sensitive applications in 5G and beyond via computation offloading. However, in multi-user heterogeneous networks, it is challenging to capture complete network information, such as wireless channel state, available bandwidth, and computation resources. The strong coupling among devices, in terms of application requirements and radio access modes, makes it even more difficult to design an optimal computation offloading scheme. Deep Reinforcement Learning (DRL) is an emerging technique for addressing such problems with limited and less accurate network information. In this paper, we utilize DRL to design an optimal computation offloading and resource allocation strategy that minimizes system energy consumption. We first present a multi-user edge computing framework for heterogeneous networks. Then, we formulate the joint computation offloading and resource allocation problem in a DRL form and propose a new DRL-inspired algorithm to minimize system energy consumption. Numerical results based on a real-world dataset demonstrate the effectiveness of our proposed algorithm compared to two benchmark solutions.
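The core idea of the abstract can be illustrated with a deliberately simplified sketch that is not the paper's algorithm: a single user learns, via tabular Q-learning (a basic RL method rather than the deep RL used in the paper), whether to compute locally or offload under a varying channel state, with the reward defined as negative energy consumption. All state names, energy values, and hyperparameters below are hypothetical placeholders for illustration only.

```python
import random

# Hypothetical toy model: one user, two channel states, two actions.
STATES = ["good_channel", "bad_channel"]        # simplified channel state info
ACTIONS = ["local", "offload"]

# Assumed energy costs in joules (illustrative numbers, not from the paper):
E_LOCAL = 4.0                                    # local execution energy
E_OFFLOAD = {"good_channel": 1.0,                # cheap uplink transmission
             "bad_channel": 6.0}                 # expensive retransmissions

def energy(state, action):
    """Energy consumed by the chosen action in the given channel state."""
    return E_LOCAL if action == "local" else E_OFFLOAD[state]

def train(steps=5000, alpha=0.1, eps=0.1, seed=0):
    """One-step Q-learning with epsilon-greedy exploration.

    Reward is the negative energy cost, so maximizing the Q-value
    minimizes expected system energy consumption.
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(steps):
        s = rng.choice(STATES)                   # channel varies over time
        if rng.random() < eps:                   # explore
            a = rng.choice(ACTIONS)
        else:                                    # exploit current estimate
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        r = -energy(s, a)                        # reward penalizes energy
        Q[(s, a)] += alpha * (r - Q[(s, a)])     # one-step update
    return Q

def policy(Q, state):
    """Greedy offloading decision for a given channel state."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

Q = train()
print(policy(Q, "good_channel"))   # offloading is cheaper on a good channel
print(policy(Q, "bad_channel"))    # local execution wins on a bad channel
```

The learned policy offloads only when the channel is good, matching the intuition that offloading decisions must adapt to wireless conditions; the paper's DRL approach extends this idea to many users, continuous resources, and high-dimensional state spaces where tabular methods do not scale.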

