Abstract

Multi-access edge computing (MEC) is an important enabling technology for 5G and 6G networks. With MEC, mobile devices can offload their computationally heavy tasks to a nearby server, which may be a node at a base station, a vehicle, or another device. With the growing number of devices, slices, and radio access technologies, task offloading is becoming increasingly complex. Traditional approaches therefore face limitations, and machine learning algorithms emerge as promising alternatives. In this paper, we consider binary and partial offloading problems and aim to jointly find optimal offloading and resource allocation decisions that maximize the number of computed bits while minimizing energy consumption, enabling more efficient use of uplink transmit power and local CPU resources. We propose the Deep Reinforcement Learning for Joint Resource Allocation and Offloading (DJROM) algorithm, which uses the double deep Q-network approach and models user equipments (UEs) as agents. We compare the proposed approach with two other machine learning based techniques, namely multi-agent deep Q-learning (MARL-DQL) and multi-agent deep Q-network (MARL-DQN), under fixed and mobile scenarios. Our results show that the DJROM scheme achieves higher efficiency than the compared algorithms.
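The abstract states that DJROM builds on the double deep Q-network approach. As background, the defining feature of double DQN is that the online network selects the next action while a separate target network evaluates it, reducing the overestimation bias of vanilla Q-learning. The sketch below illustrates only this generic target computation; the function name, batch layout, and parameters are illustrative assumptions and not taken from the paper.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target,
                       gamma=0.99, dones=None):
    """Generic double-DQN target computation (illustrative sketch,
    not DJROM's actual implementation).

    rewards:       shape (batch,) immediate rewards
    next_q_online: shape (batch, n_actions) Q-values from the online net
    next_q_target: shape (batch, n_actions) Q-values from the target net
    """
    if dones is None:
        dones = np.zeros_like(rewards, dtype=bool)
    # Online network selects the greedy next action...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...target network evaluates that action.
    next_values = next_q_target[np.arange(len(rewards)), best_actions]
    # Bootstrapped target, zeroed at terminal transitions.
    return rewards + gamma * next_values * (~dones)

# Tiny worked example with a batch of two transitions and two actions.
rewards = np.array([1.0, 0.5])
next_q_online = np.array([[0.2, 0.8], [0.6, 0.1]])
next_q_target = np.array([[0.3, 0.7], [0.5, 0.2]])
targets = double_dqn_targets(rewards, next_q_online, next_q_target,
                             gamma=0.9)
# Online net picks actions [1, 0]; target net values them [0.7, 0.5],
# so targets = [1.0 + 0.9*0.7, 0.5 + 0.9*0.5] = [1.63, 0.95].
```

In a multi-agent setup such as the one described (UEs as agents), each agent would maintain its own online/target network pair and apply this update to its local replay experience.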
