Abstract

Given the rapid growth of diverse applications in vehicular networks, a flexible architecture is crucial for improving Quality of Service (QoS). Multi-access Edge Computing (MEC), a distributed paradigm that places resource capabilities closer to the vehicles, is a promising way to reduce response time in such networks. However, MEC suffers from limited resources and struggles to handle high vehicle mobility combined with many diverse applications. This paper proposes cooperation between MEC and central-cloud decision making for offloading different vehicular applications. We formulate a new resource allocation problem that guarantees the required response time. To solve this NP-hard problem, we employ deep reinforcement learning, a suitable computational model that automatically learns the dynamics of the network state and rapidly finds a near-optimal solution. Extensive numerical analysis and results illustrate that our proposed scheme achieves a high acceptance rate with a low response time.
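To make the offloading idea concrete, the following is a minimal sketch, not the authors' method: it replaces the paper's deep reinforcement learning with tabular Q-learning over a toy model in which the state is a discretized MEC load level, the two actions are "offload to MEC" and "offload to the central cloud", and the reward penalizes response time. All latency constants and the load dynamics here are illustrative assumptions.

```python
import random

# Hypothetical toy model (not from the paper): the MEC server's response time
# grows with its load, while the central cloud has a fixed, higher network
# latency but no queueing penalty.
LOAD_LEVELS = 5
ACTIONS = (0, 1)          # 0: offload to MEC, 1: offload to central cloud
MEC_BASE_LATENCY = 1.0    # assumed cost on an idle edge server
CLOUD_LATENCY = 3.0       # assumed constant cloud round-trip cost

def response_time(load, action):
    """Simplified latency model: MEC slows down as its load grows."""
    if action == 0:
        return MEC_BASE_LATENCY * (1 + load)   # queueing penalty at the edge
    return CLOUD_LATENCY

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: each episode is one offload decision from a
    random starting load, updated toward the one-step bootstrap target."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(LOAD_LEVELS)]
    for _ in range(episodes):
        load = rng.randrange(LOAD_LEVELS)
        if rng.random() < eps:                 # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[load][a])
        reward = -response_time(load, action)
        # Offloading to MEC raises its load; using the cloud leaves it as is.
        next_load = min(load + 1, LOAD_LEVELS - 1) if action == 0 else load
        target = reward + gamma * max(q[next_load])
        q[load][action] += alpha * (target - q[load][action])
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(LOAD_LEVELS)]
# The learned policy offloads to MEC at low load and to the cloud at high load.
```

The paper's actual scheme uses deep networks to cope with a far larger, dynamic state space; the tabular version above only illustrates the decision structure that such an agent learns.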
