Abstract

With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have emerged. To resolve the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), together with a distributed algorithm, named PG-MRL, that jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of the TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm, with centralized training and distributed execution optimizes the real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm significantly reduces system delay compared with baseline algorithms.
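The first stage described above relies on a potential game reaching a stable set of offloading decisions. The paper's own delay model and update rule are not given in the abstract, so the following is only a minimal sketch of how best-response dynamics converge in a congestion-style potential game: each hypothetical TaV repeatedly picks the edge node (SeV or RSU) minimizing its own delay, where delay grows with the node's load. The node names, delay parameters, and the linear load-delay model are all illustrative assumptions, not the paper's formulation.

```python
import random

def best_response_offloading(num_tavs, nodes, base_delay, unit_load_delay,
                             max_rounds=100):
    """Best-response dynamics for a congestion-style potential game (sketch).

    Each TaV selects the node (SeV/RSU) minimizing its own delay:
        delay = base_delay[n] + unit_load_delay[n] * (load on n, itself included)
    Load-coupled costs of this form yield an exact potential game, so
    unilateral best-response updates converge to a pure Nash equilibrium.
    """
    # Start from random offloading choices (index into `nodes`).
    choice = [random.randrange(len(nodes)) for _ in range(num_tavs)]
    for _ in range(max_rounds):
        changed = False
        for i in range(num_tavs):
            # Load each node would carry, excluding TaV i itself.
            load = [0] * len(nodes)
            for j, c in enumerate(choice):
                if j != i:
                    load[c] += 1
            # Delay TaV i would experience on each node if it joined it.
            delays = [base_delay[n] + unit_load_delay[n] * (load[n] + 1)
                      for n in range(len(nodes))]
            best = min(range(len(nodes)), key=lambda n: delays[n])
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:
            # No TaV can improve unilaterally: a pure Nash equilibrium.
            break
    return choice
```

At the returned fixed point, no single TaV can lower its delay by switching nodes, which is the equilibrium property the first stage of PG-MRL would hand off to the second-stage multi-agent DDPG for power and subchannel optimization.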
