Abstract

The interconnection of vehicles in the future fifth-generation (5G) wireless ecosystem forms the so-called Internet of Vehicles (IoV). The IoV enables new classes of applications that demand delay-sensitive, compute-intensive, and bandwidth-hungry services. Mobile edge computing (MEC) and network slicing are two key enabling technologies in 5G networks that can be used to optimize the allocation of network resources and satisfy the diverse requirements of IoV applications. Because traditional model-based optimization techniques generally lead to NP-hard, strongly non-convex, and nonlinear mathematical programming formulations, in this article we introduce a model-free approach based on deep reinforcement learning (DRL) to solve the resource allocation problem in MEC-enabled IoV networks built on network slicing. Furthermore, the solution uses non-orthogonal multiple access (NOMA) to better exploit the scarce channel resources. The considered problem jointly addresses channel and power allocation, slice selection, and vehicle selection (vehicle grouping). We model the problem as a single-agent Markov decision process and solve it using DRL with the well-known deep Q-learning (DQL) algorithm. We show that our approach is robust and effective under different network conditions compared to benchmark solutions.
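The Q-learning idea underlying the approach can be illustrated with a deliberately small sketch. The toy environment below (the channel set, the congestion values, and the reward function are all illustrative assumptions, not the paper's system model) has a single agent picking one of several channels, as in the channel-selection part of the problem. The paper uses a deep Q-network as the function approximator; here a tabular Q-update with the same Bellman target keeps the sketch dependency-free.

```python
import random

# Toy single-agent sketch of the channel-selection idea: a vehicle picks one
# of N_CHANNELS, and reward is higher on less congested channels.
# All numbers are hypothetical; the paper's reward couples channel, power,
# slice, and grouping decisions.

N_CHANNELS = 4
CONGESTION = [0.9, 0.5, 0.1, 0.7]  # assumed per-channel load


def reward(action):
    # Higher reward for picking a lightly loaded channel (assumed model).
    return 1.0 - CONGESTION[action]


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_CHANNELS  # single-state problem: one Q-value per action
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(N_CHANNELS)                   # explore
        else:
            a = max(range(N_CHANNELS), key=q.__getitem__)   # exploit
        r = reward(a)
        # Q-learning update; with a single state, the bootstrap term is max(q).
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q


q = train()
best_channel = max(range(N_CHANNELS), key=q.__getitem__)
```

After training, `best_channel` settles on the least congested channel; in the full DQL algorithm, the table `q` is replaced by a neural network so the method scales to the joint state space of channels, power levels, slices, and vehicle groups.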
