Abstract
Task offloading is recognized as a promising approach to enhancing the computational performance of Connected Autonomous Vehicles (CAVs). Some CAV applications, such as metaverse applications, require substantial resources, posing significant challenges for CAVs with limited computing and storage capacity. CAVs can offload resource-intensive applications to a vehicular edge computing (VEC) server with strong computing capabilities. To fully utilize the resources of the CAV system, partial offloading is employed. However, local computing resources are limited relative to the continuous stream of partial-offloading tasks, so many locally executed task portions experience long processing times or are discarded, which is detrimental to delay-sensitive tasks on CAVs. This paper proposes LyDRL, a Lyapunov-guided deep reinforcement learning framework for CAV task offloading. Specifically, LyDRL first uses a Lyapunov function to transform the long-term optimization objective into a sequence of subproblems, one per time slot. In each time slot, deep reinforcement learning is then used to obtain the optimal offloading decision while satisfying the constraints. Simulation results show that, compared with existing algorithms, the proposed strategy ensures the stability of the CAV system and achieves the lowest system overhead.
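The Lyapunov-based decomposition described above typically takes the standard drift-plus-penalty form; the following is a minimal sketch of that form, where the queue backlogs \(Q_i(t)\), the trade-off weight \(V\), and the generic cost term are illustrative placeholders rather than the paper's exact notation:

```latex
% Quadratic Lyapunov function over task queue backlogs Q_i(t)
L(t) = \tfrac{1}{2} \sum_i Q_i(t)^2

% One-slot conditional Lyapunov drift
\Delta(t) = \mathbb{E}\!\left[ L(t+1) - L(t) \,\middle|\, \mathbf{Q}(t) \right]

% Per-slot subproblem: minimize drift plus V-weighted system overhead,
% trading queue stability against the long-term cost objective
\min_{\text{offloading decision}} \; \Delta(t) + V \,\mathbb{E}\!\left[ \mathrm{cost}(t) \,\middle|\, \mathbf{Q}(t) \right]
```

Minimizing this bound slot by slot is what allows a per-slot learner (here, the DRL agent) to stand in for the original long-term optimization while keeping the task queues stable.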