Abstract

The proliferation of smart transportation has significantly promoted the explosive growth of the Internet of Vehicles (IoV). In particular, with the rapid development of fifth-generation (5G) mobile networks, a growing number of computation-intensive and latency-sensitive vehicular applications impose heavy pressure on resource-limited vehicles. To address this issue, Multi-access Edge Computing (MEC) has emerged as a promising paradigm that serves such devices at the edge of wireless networks. Computation offloading is a critical technology that decides which tasks should be offloaded in order to minimize energy and time costs. However, conventional methods handle dependency-aware subtask topologies from vehicles inefficiently, which leads to low efficiency and wasted edge resources, especially in multi-vehicle scenarios. In this paper, we investigate the subtask topologies of IoV applications. Directed acyclic graphs (DAGs) are utilized to derive the priority of task scheduling. Further, taking privacy protection and optimal resource allocation into consideration, we put forward a distributed deep reinforcement learning (DRL) strategy based on policy gradient that requires no information sharing in edge computing. In particular, both the actor and critic networks employ a convolutional layer and a transformer to learn the optimal mapping from the input states to the offloading decision for each subtask. Numerical results show that, across various experimental settings, the proposed scheme outperforms existing offloading methods.
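To make the described actor architecture concrete, the sketch below shows one plausible way a convolutional layer and a transformer encoder could map per-subtask state features (ordered by DAG priority) to offloading decisions. This is a minimal illustration, not the authors' implementation; all layer sizes, feature dimensions, and the number of edge servers are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's released code):
# a conv layer followed by a transformer encoder maps per-subtask features
# to per-subtask offloading probabilities. The critic would share a similar
# backbone but output a single state value.
import torch
import torch.nn as nn

class ActorNet(nn.Module):
    def __init__(self, feat_dim=8, d_model=64, n_servers=3, n_heads=4):
        super().__init__()
        # Conv1d extracts local patterns along the DAG-ordered subtask sequence.
        self.conv = nn.Conv1d(feat_dim, d_model, kernel_size=3, padding=1)
        # Transformer encoder captures dependencies among subtasks.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One logit per offloading target: local execution or one of n_servers edge servers.
        self.head = nn.Linear(d_model, n_servers + 1)

    def forward(self, x):
        # x: (batch, n_subtasks, feat_dim), subtasks ordered by DAG priority
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, n_subtasks, d_model)
        h = self.encoder(h)
        return torch.softmax(self.head(h), dim=-1)        # offloading probabilities per subtask

# Usage: sample offloading decisions for a hypothetical 5-subtask DAG.
actor = ActorNet()
state = torch.randn(1, 5, 8)                              # illustrative subtask features
probs = actor(state)
decisions = torch.distributions.Categorical(probs).sample()  # (1, 5) chosen targets
```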
