Abstract

Mobile edge computing (MEC) is regarded as an effective solution for delay-sensitive services, and computation offloading, the core technology of MEC, can expand the capacity of resource-constrained mobile terminals (MTs). However, because of the interdependencies among subtasks within applications and the dynamic, complex nature of the MEC environment, offloading decision making is an NP-hard problem. In this work, a graph mapping offloading model (GMOM) based on deep reinforcement learning (DRL) is proposed to address the offloading of dependent tasks in MEC. Specifically, the MT application is first modeled as a directed acyclic graph (DAG), referred to as a DAG task. The DAG task is then transformed into a subtask sequence vector according to a predefined priority order to facilitate processing. Finally, the sequence vector is fed into an attention-based encoder-decoder framework to obtain the offloading strategy vector. The GMOM is trained with the proximal policy optimization (PPO) algorithm to minimize a comprehensive cost function that combines delay and energy consumption. Experiments show that the proposed model achieves good decision-making performance, with verified effectiveness in convergence, delay, and energy consumption.
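Two of the steps described above can be made concrete: flattening the DAG task into a subtask sequence under a priority order, and scoring an offloading decision by a weighted combination of delay and energy. The Python sketch below illustrates both under stated assumptions; the priority scores, weight values, and function names are hypothetical and are not taken from the paper, whose exact priority definition and cost weights are not given in the abstract.

```python
# Illustrative sketch only: priority scores, weights, and names are assumptions,
# not the paper's actual definitions.
import heapq

def dag_to_sequence(num_subtasks, edges, priority):
    """Flatten a DAG task into a subtask sequence vector.

    Topological sort that, among currently schedulable subtasks, always picks
    the one with the highest (assumed) priority score.
    edges: list of (u, v) pairs meaning subtask u must finish before v starts.
    priority: dict mapping subtask id -> priority score (hypothetical).
    """
    succ = {i: [] for i in range(num_subtasks)}
    indeg = {i: 0 for i in range(num_subtasks)}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Max-heap on priority (negated because heapq is a min-heap).
    ready = [(-priority[i], i) for i in range(num_subtasks) if indeg[i] == 0]
    heapq.heapify(ready)

    order = []
    while ready:
        _, u = heapq.heappop(ready)
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(ready, (-priority[v], v))
    return order  # sequence fed to the attention-based encoder-decoder

def comprehensive_cost(delay, energy, w_delay=0.5, w_energy=0.5):
    """Weighted delay/energy cost; the weights here are placeholders."""
    return w_delay * delay + w_energy * energy

# Example: a 4-subtask DAG where subtask 0 precedes 1 and 2, which precede 3.
if __name__ == "__main__":
    edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
    priority = {0: 1.0, 1: 0.8, 2: 0.9, 3: 0.5}  # assumed scores
    print(dag_to_sequence(4, edges, priority))   # e.g. [0, 2, 1, 3]
    print(comprehensive_cost(delay=1.2, energy=0.7))
```

Any sequence produced this way respects the precedence constraints of the DAG, so the downstream model only has to decide where each subtask executes, not in what order dependencies are satisfied.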
