Abstract

Mobile Edge Computing (MEC) is a promising computing paradigm in the context of 5G networks: it enables workloads to be migrated from User Equipment (UE) devices to nearby MEC servers, providing UEs with additional computing resources. In this paper, we propose an approach that jointly optimizes offloading decisions and resource allocation in a multi-user, multi-server MEC system operating in a time-varying environment. Our objective is to minimize the average task latency and the task discard rate, subject to latency constraints and the limited computing resources of the servers. While traditional optimization methods have been used to solve computation offloading problems in static environments, they are not suitable for time-varying systems. Deep reinforcement learning can be an effective method for solving optimization problems in time-varying environments, since it adjusts its policy in real time as the environment changes. However, as the number of UEs in the system grows, the number of possible joint actions grows combinatorially, making the policy difficult to learn. To address this issue, we propose a multi-branch Deep Q-Network (DQN) algorithm called Branch Deep Q-Network (BDQN), which restructures the action-generation network into a multi-branch architecture in which each branch generates a one-dimensional action. With this modification, the number of network outputs grows only linearly with the number of UEs. Numerical results show that BDQN outperforms baseline algorithms in terms of average task latency and discard rate.
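The abstract does not reproduce the BDQN architecture itself, but the branching idea it describes can be illustrated with a minimal PyTorch sketch: a shared feature trunk over the global system state, followed by one output head per UE, so the network emits n_ues × n_actions_per_ue Q-values rather than n_actions_per_ue ** n_ues. The class name BranchQNetwork and all layer sizes below are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn


class BranchQNetwork(nn.Module):
    """Sketch of a multi-branch Q-network (hypothetical, not the paper's code).

    A shared trunk encodes the global state; each branch scores one UE's own
    discrete offloading choices, so the output count grows linearly with the
    number of UEs instead of combinatorially.
    """

    def __init__(self, state_dim: int, n_ues: int, n_actions_per_ue: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the global system state.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small head per UE; each emits Q-values for that UE's actions.
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, n_actions_per_ue) for _ in range(n_ues)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        features = self.trunk(state)
        # Shape: (batch, n_ues, n_actions_per_ue).
        return torch.stack([branch(features) for branch in self.branches], dim=1)


# Example usage with arbitrary sizes: each branch is argmax'ed independently,
# yielding one offloading decision per UE.
net = BranchQNetwork(state_dim=32, n_ues=5, n_actions_per_ue=4)
state = torch.randn(1, 32)
joint_action = net(state).argmax(dim=-1)  # shape (1, 5): one action index per UE
```

Under this decomposition, greedy action selection reduces to an independent argmax per branch, which is what keeps the output count (and thus the learning difficulty of the action space) linear in the number of UEs.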
