Abstract

Vehicular edge computing has become an appealing paradigm for providing delay-sensitive and multimedia-rich services by densely deploying roadside units (RSUs) equipped with edge servers. However, due to geographical differences and energy-efficiency concerns, the computation loads of RSUs are seriously unbalanced, and RSUs may switch to a sleep state in some cases to save energy. This paper proposes a novel task-scheduling model for vehicular edge computing in which the state of an RSU may dynamically switch between sleep and work. The task requests of vehicles are modeled as independent Poisson streams, and each edge server in an RSU is modeled as a simple M/M/1 queuing system. The problem of minimizing the total delay of tasks under the proposed model is formulated, and its NP-hardness is proved. A greedy algorithm is proposed to solve the problem by carefully selecting RSUs, and a tabu search algorithm is customized to refine the solution generated by the greedy algorithm. Moreover, a deep Q-network based algorithm built on deep reinforcement learning is proposed to learn the optimal scheduling policy without prior knowledge of the dynamic statistics. Simulation results show that the deep Q-network based algorithm achieves the lowest total response time of tasks among the proposed algorithms, and that all the proposed algorithms outperform the random algorithm also presented in this paper. For example, when the maximum tolerable response time of each task is 14 s, the total response time of tasks for the deep Q-network based algorithm decreases by 24.13%, 28.73%, and 35.95% compared with the customized tabu search algorithm, the greedy algorithm, and the random algorithm, respectively.
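To make the modeling concrete, the sketch below illustrates the two building blocks the abstract names: the M/M/1 mean response time T = 1/(μ − λ), and a greedy heuristic that assigns Poisson task streams to RSUs by smallest marginal increase in total delay. All function names, rates, and the specific greedy rule are illustrative assumptions for exposition, not the paper's actual notation or algorithm.

```python
# Illustrative sketch (assumed names/rates, not the paper's algorithm):
# RSUs are modeled as M/M/1 queues; task streams are assigned greedily
# to the RSU whose marginal increase in expected delay is smallest.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    Returns infinity if the queue would be unstable (lambda >= mu)."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def greedy_schedule(task_rates, service_rates):
    """Assign each Poisson task stream (rate lambda) to an RSU (rate mu),
    largest streams first, minimizing the marginal delay increase."""
    loads = [0.0] * len(service_rates)   # current arrival rate per RSU
    assignment = []                      # (task_rate, rsu_index) pairs
    for lam in sorted(task_rates, reverse=True):
        best, best_delta = None, float("inf")
        for i, mu in enumerate(service_rates):
            before = mm1_response_time(loads[i], mu)
            after = mm1_response_time(loads[i] + lam, mu)
            # An unstable assignment gets infinite cost and is skipped.
            delta = float("inf") if after == float("inf") else after - before
            if delta < best_delta:
                best, best_delta = i, delta
        loads[best] += lam
        assignment.append((lam, best))
    return assignment, loads
```

For two identical RSUs with μ = 5 and task streams of rates 2.0 and 1.0, the greedy rule places the streams on different RSUs, since adding the second stream to the already-loaded RSU would raise its delay more than starting on the idle one.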
