Abstract

In recent years, with the rapid development of the Internet of Things (IoT) and artificial intelligence, vehicular networks have evolved from simple interactive systems into smart integrated networks. The accompanying intelligent connected vehicles (ICVs) can communicate with each other and connect to the urban traffic information network to support intelligent applications, e.g., autonomous driving, intelligent navigation, and in-vehicle entertainment services. These applications are usually delay-sensitive and compute-intensive, so the on-board computation resources of vehicles cannot meet their quality-of-service requirements. To address this problem, vehicular edge computing networks (VECNs), which exploit mobile edge computing (MEC) offloading technology, are regarded as a promising paradigm. However, existing task offloading schemes rarely account for the highly dynamic nature of vehicular networks and therefore cannot produce time-varying offloading decisions that track changes in the network. Meanwhile, commonly used mobility models do not faithfully reflect actual road traffic conditions. Toward this end, we study the task offloading problem in VECNs under a synchronized random walk mobility model. We then propose a reinforcement learning-based scheme as our solution and verify its superior performance in reducing processing delay and adapting to dynamic scenarios.
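The abstract does not detail the proposed scheme, but as a minimal sketch of how reinforcement learning can drive offloading decisions, the example below trains a tabular Q-learning agent to choose between local computing and edge offloading as the vehicle's uplink rate varies. All numbers (task size, CPU frequencies, rate levels) and the delay model are hypothetical placeholders for illustration, not values or methods from the paper.

```python
import random

# Hypothetical delay model: process a task locally, or transmit it and
# process it on a (faster) edge server.
def local_delay(task_cycles, cpu_freq=1e9):
    return task_cycles / cpu_freq

def offload_delay(task_bits, task_cycles, rate, edge_freq=10e9):
    return task_bits / rate + task_cycles / edge_freq

ACTIONS = [0, 1]  # 0 = compute locally, 1 = offload to edge server

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Discretized channel state: low / medium / high uplink rate (bit/s).
    rates = [0.5e6, 5e6, 20e6]
    Q = [[0.0, 0.0] for _ in rates]  # Q-table: Q[state][action]
    for _ in range(episodes):
        s = rng.randrange(len(rates))
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        task_bits, task_cycles = 1e6, 1e9  # one placeholder task
        if a == 0:
            delay = local_delay(task_cycles)
        else:
            delay = offload_delay(task_bits, task_cycles, rates[s])
        reward = -delay  # minimize processing delay
        s2 = rng.randrange(len(rates))  # vehicle moves; channel state changes
        # Standard Q-learning update.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
    return Q
```

With these placeholder numbers, the learned policy offloads when the uplink rate is high (transmission is cheap relative to the edge's speedup) and computes locally when the rate is low, which is the kind of time-varying decision a dynamic vehicular network requires.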
