Abstract

Cellular Vehicle-to-Everything (C-V2X) communication, as standardized by the 3rd Generation Partnership Project (3GPP), plays an essential role in enabling fully autonomous driving. C-V2X envisions supporting various use cases, e.g., platooning and remote driving, with varying quality of service (QoS) requirements regarding latency, reliability, data rate, and positioning. To meet these stringent QoS requirements in realistic mobility scenarios, an intelligent and efficient resource allocation scheme is required. This paper addresses channel congestion in location-based resource allocation using Deep Reinforcement Learning (DRL) for vehicle user equipments (V-UEs) in dynamic groupcast communication, i.e., communication without a V-UE acting as a group head. Using DRL, the base station acts as a centralized agent: it adapts to the channel congestion caused by vehicle density in resource pools segregated by location, evaluated in the TAPASCologne scenario of the Simulation of Urban MObility (SUMO) platform. A system-level simulation shows that the DRL-based congestion control approach achieves a better packet reception ratio (PRR) than a legacy congestion control scheme when resource pools are segregated by location.
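To make the centralized-agent idea concrete, below is a minimal, illustrative sketch, not the paper's implementation: the base station observes a discretized congestion level per location-based resource pool and learns how many sub-channels to grant each pool. A tabular Q-learning agent stands in for the paper's DRL agent to keep the example self-contained; the state discretization, action set, toy environment, and PRR-style reward are all assumptions.

import numpy as np

# Sketch: the base station is a centralized agent that, per location-based
# resource pool, observes a discretized congestion level (a proxy for channel
# busy ratio driven by vehicle density) and picks a sub-channel allocation.
# Tabular Q-learning is used here instead of a deep network for brevity.

N_POOLS = 3            # location-based resource pools (assumed count)
CONGESTION_LEVELS = 5  # discretized congestion bins per pool (assumed)
ACTIONS = 4            # candidate sub-channel allocation choices (assumed)

rng = np.random.default_rng(0)
# One Q-table per pool: state = congestion bin, action = allocation choice.
q = np.zeros((N_POOLS, CONGESTION_LEVELS, ACTIONS))

def observe_congestion(pool, allocation):
    """Toy environment: higher vehicle density and a smaller allocation
    push the pool into a higher congestion bin."""
    density = rng.integers(0, CONGESTION_LEVELS)  # vehicle-density proxy
    return int(np.clip(density - allocation + 1, 0, CONGESTION_LEVELS - 1))

def reward(congestion_bin):
    """Proxy for packet reception ratio: low congestion -> high reward."""
    return 1.0 - congestion_bin / (CONGESTION_LEVELS - 1)

eps, alpha, gamma = 0.1, 0.5, 0.9  # exploration rate, step size, discount
state = [observe_congestion(p, 0) for p in range(N_POOLS)]

for step in range(5000):
    for p in range(N_POOLS):
        s = state[p]
        # Epsilon-greedy action selection over allocation choices.
        a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(q[p, s]))
        s_next = observe_congestion(p, a)
        r = reward(s_next)
        # Standard Q-learning update toward the bootstrapped target.
        q[p, s, a] += alpha * (r + gamma * q[p, s_next].max() - q[p, s, a])
        state[p] = s_next

print("Greedy allocation choice per (pool, congestion bin):")
print(np.argmax(q, axis=2))

In the paper's setting, the discrete congestion bins would presumably be replaced by richer observations fed to a neural network, and the environment by the SUMO-based system-level simulation, but the agent-environment interaction loop has the same shape.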
