Abstract

In a typical Internet of Vehicles (IoV) scenario, edge servers (ESs) are deployed near roadside units (RSUs) to process the collected data in real time for a variety of IoV services. Because ESs are lightweight compared with cloud servers, an inappropriate distribution of ESs leads to unbalanced workloads among them. Thus, devising an ES placement plan that avoids the risk of overload and improves the quality of service (QoS) remains a challenge. To tackle this, a deep reinforcement learning-based multi-objective edge server placement strategy, named DESP, is explored to improve the coverage rate and workload balance and to reduce the average task-completion delay in the IoV. In particular, the ES placement problem is formulated as a Markov Decision Process (MDP), and deep reinforcement learning, i.e., a Deep Q-Network (DQN), is applied to obtain a placement scheme that achieves the multiple objectives above. Finally, a real vehicular data set is used to assess the validity of DESP.
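To illustrate the MDP-plus-DQN formulation sketched in the abstract, the following toy example places K servers among N candidate RSU sites with a small NumPy Q-network. The environment, reward weights, demand values, and network sizes are all hypothetical stand-ins, not the paper's actual DESP formulation; the reward merely proxies coverage and workload balance.

```python
# Hypothetical toy sketch of a DQN-style placement loop; not the DESP algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
N_SITES, K = 8, 3                        # candidate RSU sites, servers to place (toy values)
demand = rng.uniform(1.0, 5.0, N_SITES)  # assumed per-site workload demand

def reward(placement):
    """Toy multi-objective reward: coverage proxy minus a workload-imbalance penalty."""
    placed = np.flatnonzero(placement)
    if len(placed) == 0:
        return 0.0
    covered = demand[placed].sum() / demand.sum()     # coverage-rate proxy
    imbalance = demand[placed].std() / demand.mean()  # workload-balance proxy
    return covered - 0.5 * imbalance

# One-hidden-layer Q-network over the binary placement state.
H = 16
W1 = rng.normal(0, 0.1, (N_SITES, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, N_SITES)); b2 = np.zeros(N_SITES)

def q_values(s):
    h = np.maximum(0, s @ W1 + b1)       # ReLU hidden layer
    return h @ W2 + b2, h

def train_step(s, a, target, lr=0.01):
    """One SGD step on the squared TD error of the taken action."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    err = q[a] - target
    W2[:, a] -= lr * err * h; b2[a] -= lr * err
    gh = W2[:, a] * err * (h > 0)        # backprop through ReLU
    W1 -= lr * np.outer(s, gh); b1 -= lr * gh

eps, gamma = 0.2, 0.9
for episode in range(300):
    s = np.zeros(N_SITES)                # state: which sites currently host an ES
    for _ in range(K):                   # action: pick one still-empty site
        free = np.flatnonzero(s == 0)
        if rng.random() < eps:
            a = rng.choice(free)
        else:
            q, _ = q_values(s)
            a = free[np.argmax(q[free])]
        s2 = s.copy(); s2[a] = 1
        r = reward(s2) - reward(s)       # incremental multi-objective gain
        done = s2.sum() == K
        if done:
            target = r
        else:
            q2, _ = q_values(s2)
            target = r + gamma * q2[np.flatnonzero(s2 == 0)].max()
        train_step(s, a, target)
        s = s2

# Greedy rollout with the learned Q-network.
s = np.zeros(N_SITES)
for _ in range(K):
    free = np.flatnonzero(s == 0)
    q, _ = q_values(s)
    s[free[np.argmax(q[free])]] = 1
print("placement:", sorted(np.flatnonzero(s).tolist()))
```

In this sketch each episode builds a full placement one site at a time, so the MDP horizon is K steps; DESP's real state, action, and reward definitions are those of the paper, not this toy.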
