Abstract

In recent years, the number of vehicles connected to the Internet has increased explosively, and these vehicles produce a large number of computation-intensive and delay-sensitive applications, which poses severe challenges to the Internet of Vehicles (IoV). To effectively alleviate this situation, mobile edge computing (MEC) has been proposed, which allows vehicles to offload tasks to edge servers for processing. However, in real traffic environments, congestion during the morning and evening rush hours leads to a sudden surge in the number of tasks. Traditional fixed Base Stations (BSs) are constrained by geographical factors and cannot cope with such surges, which restricts the development of MEC. Therefore, we introduce Unmanned Aerial Vehicles (UAVs) into the system to improve its mobility. Computation offloading is a critical technology that decides when and where tasks should be offloaded to minimize the total cost. In this paper, we formulate the offloading decision problem as a Markov decision process and propose an optimized deep reinforcement learning (DRL) method based on prioritized experience replay to improve the training efficiency of the network. To fully utilize the resources of vehicles, MEC servers, and UAVs, we propose a user fairness factor. Evaluation results verify that the proposed algorithm is more effective than existing offloading methods.
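The abstract does not include implementation details, but as a rough illustration of the prioritized experience replay mechanism the proposed DRL method builds on, a minimal proportional-prioritization buffer might look like the sketch below. The class name, the hyperparameters `alpha`, `beta`, and `eps`, and the overall structure are assumptions for illustration, not the authors' code:

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (a sketch,
    following Schaul et al., 2016, not the paper's implementation).

    Transition i is sampled with probability p_i^alpha / sum_k p_k^alpha,
    where p_i is its TD-error-based priority.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.eps = eps              # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias that
        # non-uniform sampling introduces into the gradient update.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Priority is proportional to the magnitude of the TD error.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In a typical training loop, the agent would replace uniform replay sampling with `sample()` and call `update_priorities()` with the fresh TD errors after each gradient step, so that surprising transitions are revisited more often.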
