Abstract

Computation offloading, an efficient and promising computing paradigm for mobile edge computing (MEC), optimally allocates computation between MEC servers and Internet of Things (IoT) devices. However, in modern wireless communications most computational tasks are latency-sensitive, which challenges the efficiency of offloading services. Moreover, to make the best use of computation resources, ensuring stable, optimal offloading decisions for resource allocation remains an open issue. In this paper, a novel adaptive deep reinforcement learning-based offloading (ADRLO) framework is developed to achieve stable, optimal computation revenue with low execution latency for MEC offloading. Structurally, we implement a residual-connected deep neural network (DNN) to generate a series of candidate offloading strategies and evaluate the global system computation rate of each. In particular, the strategy achieving the maximum computation rate is then stored as experience to train the DNN, forming a closed-loop deep reinforcement learning (DRL) mechanism. To further enhance the efficiency and stability of MEC offloading, an adaptive regulator is introduced for the framework parameters, including the size of the action space and the DNN learning rate. Numerical results show that, compared with existing state-of-the-art approaches, our proposal achieves optimal computation revenue, exhibits better convergence, and reduces execution latency by almost 50%.
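The closed-loop mechanism described above can be sketched in a few lines. The toy below is an illustration, not the paper's implementation: a small fully connected network stands in for the residual-connected DNN, `computation_rate` is a made-up surrogate objective (the paper's actual rate model is not given in the abstract), and the regulator's shrink/decay rule is an assumed schedule. Only the loop structure — relax, quantize into K binary candidates, pick the best by computation rate, train on it, then adapt K and the learning rate — follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5       # number of IoT devices
K = N       # initial candidate action-space size (adaptively regulated)
LR = 0.01   # initial DNN learning rate (adaptively regulated)

# One-hidden-layer network standing in for the residual-connected DNN.
W1 = rng.normal(0.0, 0.1, (N, 16))
W2 = rng.normal(0.0, 0.1, (16, N))

def dnn_forward(h):
    """Map channel state h to a relaxed offloading decision in [0, 1]^N."""
    z = np.tanh(h @ W1)
    return 1.0 / (1.0 + np.exp(-(z @ W2)))

def quantize(relaxed, k):
    """Order-preserving quantization: k binary candidates from one relaxed output."""
    order = np.argsort(-relaxed)
    cands = []
    for i in range(k):
        a = np.zeros(N)
        a[order[: i + 1]] = 1.0   # offload the i+1 strongest devices
        cands.append(a)
    return cands

def computation_rate(h, a):
    """Toy surrogate for the system computation rate (NOT the paper's model)."""
    return np.sum(a * np.log1p(h)) + 0.3 * np.sum((1.0 - a) * h)

def train_step(h, a_star, lr):
    """One cross-entropy gradient step toward the best candidate action."""
    global W1, W2
    z = np.tanh(h @ W1)
    p = 1.0 / (1.0 + np.exp(-(z @ W2)))
    d2 = p - a_star                      # output-layer error
    d1 = (d2 @ W2.T) * (1.0 - z ** 2)    # backprop through tanh (uses pre-update W2)
    W2 -= lr * np.outer(z, d2)
    W1 -= lr * np.outer(h, d1)

for t in range(200):
    h = rng.random(N)                                  # toy wireless channel gains
    cands = quantize(dnn_forward(h), K)                # K binary offloading actions
    best = max(cands, key=lambda a: computation_rate(h, a))
    train_step(h, best, LR)                            # best action replayed as experience
    if t % 50 == 49:                                   # illustrative adaptive regulator
        K = max(2, K - 1)                              # shrink action space as DNN settles
        LR *= 0.8                                      # decay learning rate
```

In this sketch the regulator simply anneals both parameters on a fixed schedule; the paper's regulator would instead adapt them from observed convergence behavior.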
