Dynamic and autonomous SideLink (SL) Radio Resource Management (RRM) is essential for platoon-based cellular vehicular (C-V2X) networks, yet it is challenging for several reasons: the spectrum below 6 GHz is limited, vehicle-to-everything (V2X) communication requirements are stringent, the environment is uncertain and dynamic, vehicle sensing capabilities are limited, and operation is inherently distributed. These limitations often lead to resource collisions, data packet loss, and increased latency. The standardized approaches in Long-Term Evolution-V2X (LTE-V2X) and New Radio-V2X (NR-V2X) rely on random resource selection, which limits their efficiency. Moreover, RRM is an inherently complex combinatorial optimization problem that may involve conflicting objectives and constraints, making traditional approaches inadequate. Platoon-based communication requires careful resource allocation to support a diverse mix of traffic types: safety-critical control messaging within platoons, less time-sensitive traffic-management information exchanged between platoons, and even infotainment services such as media streaming. Optimizing intra- and inter-platoon resource sharing is therefore crucial to avoid excessive interference and to sustain overall network performance. Deep Reinforcement Learning (DRL), which combines Deep Learning (DL) and Reinforcement Learning (RL), has recently been investigated for network resource management and offers a promising solution to these challenges: a DRL agent, represented by deep neural networks, interacts with its environment and learns optimal decision-making through trial and error. This paper surveys proposed DRL-based methods for autonomous SL RRM in single- and multi-agent platoon-based C-V2X networks, covering both intra- and inter-platoon communications and their specific requirements. We discuss the components of the Markov Decision Process (MDP) used to model the sequential decision-making of RRM, then detail the DRL algorithms, training paradigms, and insights from the reported results. Finally, we highlight challenges in existing works and suggest strategies for addressing them.
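To make the MDP framing concrete: an MDP is specified by a tuple (S, A, P, R, γ) of states, actions, transition dynamics, a reward function, and a discount factor, and in SL RRM the action typically amounts to selecting a transmission resource. The sketch below is a minimal, purely illustrative DQN-style loop for a single agent choosing among resource blocks in a toy environment; it is not the paper's implementation, and the state dimension, the `toy_env_step` collision model, and the reward shaping are all hypothetical placeholders.

```python
# Illustrative sketch only: a minimal DQN-style agent selecting SideLink
# resource blocks in a toy single-agent MDP. All environment dynamics,
# dimensions, and rewards here are hypothetical placeholders, not the
# paper's method.
import random
import numpy as np
import torch
import torch.nn as nn

N_RESOURCES = 8   # hypothetical number of selectable sub-channels (actions)
STATE_DIM = 16    # hypothetical local observation size (e.g., sensed power per resource)

class QNetwork(nn.Module):
    """Small MLP mapping a local observation to one Q-value per resource."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_RESOURCES),
        )

    def forward(self, x):
        return self.net(x)

def toy_env_step(action):
    """Stand-in for the V2X environment: returns (next_state, reward).
    Hypothetical reward: +1 for a collision-free transmission, -1 otherwise."""
    collided = random.random() < 0.3  # placeholder collision model
    reward = -1.0 if collided else 1.0
    next_state = np.random.rand(STATE_DIM).astype(np.float32)
    return next_state, reward

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.95, 0.1
state = np.random.rand(STATE_DIM).astype(np.float32)

for step in range(500):
    # Epsilon-greedy selection over resource blocks (trial-and-error exploration).
    if random.random() < epsilon:
        action = random.randrange(N_RESOURCES)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = toy_env_step(action)

    # One-step temporal-difference update (replay buffer and target network
    # omitted for brevity).
    q_pred = q_net(torch.from_numpy(state))[action]
    with torch.no_grad():
        q_target = reward + gamma * q_net(torch.from_numpy(next_state)).max()
    loss = (q_pred - q_target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
```

In a multi-agent platoon setting, each vehicle would run such an agent on its own local observations, which is what makes the trained policy compatible with the distributed operation described above.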