Abstract

This paper investigates a computation resource optimization problem for mobile edge computing (MEC)-aided Internet-of-Things (IoT) devices with a reinforcement learning (RL) solution. Specifically, we leverage stochastic optimization and apply the Lyapunov optimization technique to maximize long-term energy efficiency, taking into account transmission power, network stability, and transmission latency. Based on a Markov decision process and a model-free deep RL (DRL) approach, we propose a double DRL-based online computation offloading method that trains a deep neural network through interaction with the environment to solve the computation offloading and transmission latency problem in dynamic MEC-aided IoT environments. Furthermore, we design an adaptive method for continuous action-state spaces to minimize the completion time and total energy consumption of the IoT devices under stochastic computation offloading tasks. The proposed real-time Lyapunov optimization and DRL algorithms achieve low computational complexity and fast processing time. Simulation results demonstrate that the proposed algorithm achieves near-optimal control performance with enhanced energy efficiency compared to baseline policy-control algorithms.
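To make the Lyapunov approach concrete, the following is a minimal sketch of the drift-plus-penalty idea the abstract refers to: each time slot, a device picks a transmit power that minimizes a weighted sum of energy cost and queue backlog, which stabilizes the task queue while limiting energy use. All parameter values (`V`, the candidate power levels, the toy rate function, and the arrival distribution) are illustrative assumptions, not the paper's actual model.

```python
import random

# Assumed parameters: V trades off energy (penalty) against queue backlog.
V = 10.0                          # Lyapunov trade-off weight (illustrative)
POWER_LEVELS = [0.0, 0.5, 1.0]    # candidate transmit powers in watts (illustrative)

def service_rate(p):
    """Toy concave rate: bits served per slot at power p (assumption)."""
    return 2.0 * (p ** 0.5)

def drift_plus_penalty_choice(q):
    """Greedy per-slot rule: minimize V*power - backlog*service_rate(power)."""
    return min(POWER_LEVELS, key=lambda p: V * p - q * service_rate(p))

def simulate(slots=1000, seed=0):
    """Run the per-slot rule over random task arrivals; return final backlog and energy."""
    rng = random.Random(seed)
    q, energy = 0.0, 0.0
    for _ in range(slots):
        a = rng.uniform(0.0, 1.5)                 # stochastic task arrivals (bits)
        p = drift_plus_penalty_choice(q)
        q = max(q - service_rate(p), 0.0) + a     # queue update (network stability)
        energy += p                               # accumulated transmit energy
    return q, energy
```

With this rule the device stays silent while the backlog is small (energy cost dominates) and transmits once the backlog grows, so the queue remains bounded; a larger `V` saves energy at the price of a longer queue, which is the energy-latency trade-off the paper's formulation balances.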
