Abstract

Vehicular edge computing has emerged as a promising paradigm that offloads computation-intensive, latency-sensitive tasks to mobile-edge computing (MEC) servers. However, it is difficult to provide users with excellent quality-of-service (QoS) by relying on MEC server resources alone. Therefore, in this paper, we formulate a computation offloading policy based on deep reinforcement learning (DRL) in a vehicle-assisted vehicular edge computing network (VAEN), where the idle resources of vehicles are treated as edge resources. Specifically, each task is represented as a directed acyclic graph (DAG) and offloaded to edge nodes according to our proposed subtask scheduling priority algorithm. Further, we formalize the computation offloading problem under the constraints of the candidate-service-vehicle model, aiming to minimize the long-term system cost, which comprises delay and energy consumption. To this end, we propose a distributed computation offloading algorithm based on multiagent DRL (DCOM), in which an improved actor-critic network (IACN) is devised to extract features, and a joint mechanism of prioritized experience replay and adaptive n-step learning (JMPA) is proposed to enhance learning efficiency. Numerical simulations demonstrate that, in the VAEN scenario, DCOM significantly reduces latency and energy consumption compared with other advanced benchmark algorithms.
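The abstract does not specify the subtask scheduling priority rule. As an illustration only, the Python sketch below computes a classic HEFT-style "upward rank" over a toy DAG, a common way to assign priorities to DAG subtasks before offloading: a subtask's priority is its own computation cost plus the most expensive path (communication plus successor priority) to the exit task. All identifiers here (dag, comp_cost, upward_rank) are hypothetical and not taken from the paper.

```python
# Illustrative sketch, not the paper's exact priority rule:
# HEFT-style upward-rank priorities for DAG subtask scheduling.
from functools import lru_cache

# Toy DAG as an adjacency list: subtask -> list of (successor, communication cost).
dag = {
    0: [(1, 2.0), (2, 3.0)],
    1: [(3, 1.0)],
    2: [(3, 2.0)],
    3: [],
}
# Hypothetical mean computation cost of each subtask.
comp_cost = {0: 5.0, 1: 4.0, 2: 6.0, 3: 3.0}

@lru_cache(maxsize=None)
def upward_rank(task: int) -> float:
    """Priority = own cost + max over successors of (comm cost + successor rank)."""
    successors = dag[task]
    if not successors:
        return comp_cost[task]
    return comp_cost[task] + max(c + upward_rank(s) for s, c in successors)

# Schedule subtasks in descending priority; the entry task comes first,
# the exit task last, so precedence constraints are respected.
order = sorted(dag, key=lambda t: (-upward_rank(t), t))
print(order)  # [0, 2, 1, 3] for this toy DAG
```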
