Abstract

In Release 14 (Rel-14) of Long Term Evolution (LTE), the 3rd Generation Partnership Project (3GPP) standard introduced Cellular Vehicle-to-Everything (C-V2X) communication to pave the way for future intelligent transport systems (ITS). C-V2X communication envisions supporting a diverse range of use cases with varying quality-of-service (QoS) requirements. For example, cooperative collision avoidance requires stringent reliability, while infotainment use cases require high data throughput. C-V2X communication remains susceptible to performance degradation due to network congestion. This paper presents a centralized congestion control scheme for C-V2X communication based on the Deep Reinforcement Learning (DRL) framework. The algorithm is evaluated through system-level simulation of the TAPASCologne scenario in the Simulation of Urban Mobility (SUMO) platform. The results show the effectiveness of the DRL-based approach in achieving a packet reception ratio (PRR) consistent with each packet's associated QoS while keeping the average measured Channel Busy Ratio (CBR) below 0.65.
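The abstract does not specify the DRL reward design, but one plausible shaping consistent with the stated goals (meet a per-QoS PRR target, keep CBR below 0.65) can be sketched as follows. The QoS class names and PRR target values here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a DRL reward signal for C-V2X congestion control.
# Assumption: the agent is rewarded for meeting a per-QoS-class PRR target
# and penalized for channel load (CBR) exceeding the 0.65 bound reported
# in the abstract. The actual reward used in the paper may differ.

CBR_THRESHOLD = 0.65  # target upper bound on Channel Busy Ratio (from abstract)

# Illustrative per-QoS-class PRR targets (assumed values, not from the paper)
PRR_TARGETS = {"safety": 0.99, "infotainment": 0.90}

def reward(prr: float, cbr: float, qos_class: str) -> float:
    """Scalar reward: penalize PRR shortfall below the class target
    and CBR overshoot above the congestion threshold."""
    prr_gap = max(0.0, PRR_TARGETS[qos_class] - prr)  # unmet reliability
    cbr_excess = max(0.0, cbr - CBR_THRESHOLD)        # congestion overshoot
    return 1.0 - prr_gap - cbr_excess
```

Under this shaping, the maximum reward of 1.0 is attained only when the PRR target is met and the channel stays under the CBR bound; a centralized DRL agent could use such a signal to tune per-vehicle transmission parameters.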

