Abstract

Cloudlet-based vehicular networks are a promising paradigm for enhancing computation services through distributed computation, where vehicle edge computing (VEC) cloudlets are deployed in the vicinity of vehicles. To further improve computing efficiency and reduce task processing delay, we present a parallel task scheduling strategy based on multi-agent deep reinforcement learning (DRL) for delay-optimal VEC in vehicular networks, in which multiple computation tasks select target threads in a VEC server for execution. We model the target-thread selection of computation tasks as a multi-agent reinforcement learning problem and solve it with a task scheduling algorithm based on multi-agent DRL that is implemented in a distributed manner. Each computation task, acting as an agent when selecting its target thread, interacts with the VEC environment, receives observations tied to a common reward, and learns to reduce the task processing delay by updating a multi-agent deep Q network (MADQN) with the obtained experiences. Experimental results show that the proposed DRL-based scheduling algorithm achieves significant performance gains, reducing the task processing delay by 40% and increasing the processing success probability of computation tasks by more than 30% compared with traditional task scheduling algorithms.
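To make the scheduling idea concrete, the following is a minimal, illustrative sketch of independent DQN learners that share a common reward, in the spirit of the MADQN approach summarized above. The toy queue environment, the reward shape, and all names and hyperparameters (e.g., NUM_AGENTS, NUM_THREADS) are assumptions made for illustration, not the paper's actual system model; the sketch requires PyTorch.

    # Minimal MADQN sketch: each computation task is an agent that picks a
    # target thread; all agents receive a common reward. Everything here is
    # illustrative (assumed environment, reward, and hyperparameters).
    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    NUM_AGENTS = 4    # computation tasks acting as agents (assumed)
    NUM_THREADS = 3   # candidate threads in the VEC server (assumed)

    class QNet(nn.Module):
        """Small MLP mapping per-thread queue lengths to Q-values."""
        def __init__(self, obs_dim: int, n_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, x):
            return self.net(x)

    class Agent:
        """One task: an independent DQN learner with its own replay buffer."""
        def __init__(self):
            self.q = QNet(NUM_THREADS, NUM_THREADS)
            self.opt = optim.Adam(self.q.parameters(), lr=1e-3)
            self.buffer = deque(maxlen=10_000)

        def act(self, obs, eps: float) -> int:
            if random.random() < eps:                 # epsilon-greedy exploration
                return random.randrange(NUM_THREADS)
            with torch.no_grad():
                return int(self.q(torch.tensor(obs)).argmax())

        def learn(self, batch_size: int = 32, gamma: float = 0.9):
            if len(self.buffer) < batch_size:
                return
            batch = random.sample(self.buffer, batch_size)
            obs, act, rew, nxt = map(torch.tensor, zip(*batch))
            q = self.q(obs.float()).gather(1, act.long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = rew.float() + gamma * self.q(nxt.float()).max(1).values
            loss = nn.functional.mse_loss(q, target)
            self.opt.zero_grad(); loss.backward(); self.opt.step()

    def step(queues, actions):
        """Toy environment: delay grows with each thread's queue length.
        The common reward is the negative of the slowest (parallel) finish time."""
        for a in actions:
            queues[a] += 1
        delay = max(queues)                       # slowest thread dominates
        queues = [max(0, q - 1) for q in queues]  # each thread drains one task per slot
        return queues, -float(delay)

    agents = [Agent() for _ in range(NUM_AGENTS)]
    queues = [0] * NUM_THREADS
    for episode in range(200):
        eps = max(0.05, 1.0 - episode / 150)           # annealed exploration
        obs = [float(q) for q in queues]
        actions = [ag.act(obs, eps) for ag in agents]  # distributed thread selection
        queues, reward = step(queues, actions)         # shared (common) reward
        nxt = [float(q) for q in queues]
        for ag, a in zip(agents, actions):
            ag.buffer.append((obs, a, reward, nxt))
            ag.learn()

Because the shared reward is the negative of the worst thread's finish time, the independently learning agents are nudged toward balancing the parallel load; in the paper's setting this placeholder would be replaced by the actual task processing delay model of the VEC server.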
