Abstract

To address the limited computing power of vehicles, Vehicle Edge Computing (VEC) allows vehicles to schedule tasks to edge nodes with sufficient resources. In this paper, we propose a multi-agent reinforcement learning (MARL) approach to solve the multi-task scheduling problem in a dynamic VEC environment. First, we model the cooperative scheduling of dependent tasks in the VEC environment, taking task priority and edge node load balancing into account during scheduling. We define the optimization objective as minimizing the task processing delay and show that the problem is NP-hard. Then, we design SCMA, a distributed algorithm based on MARL that enables vehicles to find the optimal scheduling strategy by cooperating and sharing resources with one another. Finally, we use SUMO to simulate the road network topology and generate vehicle traffic trajectories, and we construct heterogeneous vehicular applications for the simulation experiments using a DAG generator. The simulation results show that SCMA outperforms existing algorithms.
