Abstract

The wide application of edge cloud computing in the Internet of Vehicles (IoV) provides lower latency, more efficient computing power, and more reliable data transmission services for vehicle applications. Realistic vehicle applications frequently consist of multiple tasks with dependencies, and efficiently and quickly scheduling the individual tasks of multiple vehicle applications to reduce latency and energy consumption is challenging. Our proposed approach leverages Deep Reinforcement Learning (DRL) to develop a task scheduling strategy that ensures real-time and efficient operation. We maximize the utilization of available resources by harnessing the computational capabilities of vehicles, multiple Mobile Edge Computing (MEC) servers, and a cloud server. Specifically, we model task dependencies with a Directed Acyclic Graph (DAG) and design dynamically adjustable weights for delay and energy consumption. We transform the dependency-aware task offloading problem in the vehicle-edge-cloud environment into a Markov Decision Process (MDP), which allows it to be tackled effectively. To obtain optimized offloading decisions quickly, we employ a Double Deep Q-Network (DDQN) together with specially designed mobility management strategies. A penalty mechanism is introduced into the DDQN that imposes a penalty whenever a vehicle application is delayed beyond its deadline. Simulation results show that the proposed scheme significantly decreases the latency and energy consumption of multiple applications compared to three baseline schemes while ensuring the successful execution of tasks.
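To make the weighted delay-energy objective and the deadline penalty concrete, the following Python sketch shows one way such a reward could feed a Double DQN target. It is a minimal illustration under assumptions, not the authors' implementation: the weight values, penalty constant, discount factor, and function names are all hypothetical.

```python
import numpy as np

# Illustrative sketch only: weights, penalty, gamma, and example numbers are assumptions.

def reward(delay, energy, deadline, w_delay=0.5, w_energy=0.5, penalty=10.0):
    """Negative weighted cost of an offloading decision for one application,
    with an extra penalty when completion delay exceeds the deadline."""
    r = -(w_delay * delay + w_energy * energy)
    if delay > deadline:
        r -= penalty  # deadline-violation penalty described in the abstract
    return r

def ddqn_target(r, next_q_online, next_q_target, done, gamma=0.99):
    """Double DQN target: the online network selects the next action and the
    target network evaluates it, which reduces Q-value overestimation."""
    if done:
        return r
    best_action = int(np.argmax(next_q_online))
    return r + gamma * next_q_target[best_action]

# Example: a task offloaded to an MEC server finishes in 0.8 s using 1.2 J
# against a 1.0 s deadline (numbers are made up for illustration).
r = reward(delay=0.8, energy=1.2, deadline=1.0)
target = ddqn_target(r, np.array([1.3, 0.7, 2.1]), np.array([1.1, 0.9, 1.8]), done=False)
print(r, target)
```

In this sketch, raising `w_delay` relative to `w_energy` biases the learned policy toward latency-sensitive applications, while the penalty term discourages offloading decisions that cause deadline misses.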
