Abstract

Uplink (UL) and downlink (DL) decoupled cellular access through flexible cell association has attracted considerable attention due to benefits such as higher network throughput, better load balancing, and lower energy consumption. In this paper, we introduce a novel reinforcement-learning-aided decoupled RAN access framework for Cellular Vehicle-to-Everything (V2X) communications and propose a two-step RAN slicing approach that dynamically allocates radio resources to V2X services at different time granularities. We derive an innovative QoS metric for the V2V cellular mode that takes into account the bidirectional nature of V2V cellular communications. Moreover, we maximize the sum utility under the proposed QoS metric by leveraging a Deep Deterministic Policy Gradient (DDPG) enabled RAN slicing method. Simulation results demonstrate the advantages of the proposed reinforcement-learning-aided decoupled RAN slicing framework in achieving load balancing, maximizing total network utility, and satisfying the QoS metric of Cellular V2X communications.
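To make the DDPG-enabled slicing idea concrete, the sketch below shows a minimal actor-critic setup in which a deterministic policy maps an observed per-slice state to a soft partition of radio resources, and the critic scores that partition by its estimated long-term utility. This is an illustrative assumption about how such an agent could be structured, not the paper's implementation; the slice set, state features, reward, and all names (NUM_SLICES, STATE_DIM, ddpg_update) are hypothetical.

```python
# Minimal DDPG-style sketch for RAN slicing (illustrative only, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SLICES = 3   # assumed slice set, e.g. V2N downlink, V2N uplink, V2V sidelink
STATE_DIM = 6    # assumed features, e.g. per-slice traffic load and achieved QoS

class Actor(nn.Module):
    """Deterministic policy: state -> fraction of radio resources per slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SLICES),
        )
    def forward(self, state):
        # Softmax keeps the action a valid resource partition (fractions sum to 1).
        return F.softmax(self.net(state), dim=-1)

class Critic(nn.Module):
    """Q(s, a): estimated long-term network utility of a slicing decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_SLICES, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(state, action, reward, next_state, gamma=0.99):
    """One DDPG update on a single transition (replay buffer / target nets omitted)."""
    # Critic: regress Q(s, a) toward reward + gamma * Q(s', pi(s')).
    with torch.no_grad():
        target_q = reward + gamma * critic(next_state, actor(next_state))
    critic_loss = F.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of utility for the policy's own action.
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Toy usage: a random transition standing in for one slicing window.
s = torch.randn(1, STATE_DIM)
a = actor(s).detach()
r = torch.tensor([[1.0]])          # stand-in for the sum-utility reward
s_next = torch.randn(1, STATE_DIM)
ddpg_update(s, a, r, s_next)
```

In this sketch the softmax output is a natural fit for slicing because the action is inherently a partition of a fixed resource budget; a full agent would add a replay buffer, target networks, and exploration noise as in standard DDPG.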
