Abstract

Resource management in next-generation vehicle-to-everything (V2X) communication networks is a demanding research problem: without efficient resource management, the network cannot accommodate highly dynamic traffic demands or achieve its best performance. To address this challenge, we propose a duplex deep reinforcement learning (DDRL)-based resource management framework for next-generation V2X communication networks. The framework incorporates multiple network state parameters into the learning phase to address resource management problems effectively; its key objective is to select the forthcoming resource control state through the duplex deep reinforcement learning phase, thereby optimizing resource utilization efficiency. To achieve this, we jointly implement time division duplexing (TDD) and frequency division duplexing (FDD), enabling adaptive switching between the two duplex modes to meet high-mobility requirements and efficiently utilize limited radio resources. The study introduces a dynamic duplex deep Q-learning-assisted sequential decision-making algorithm that manages band resources for the next-generation V2X network and allocates them efficiently while accounting for latency, system throughput, network utilization, busy channel rate (BCR), and packet received rate. To validate the proposed framework, we conducted simulation experiments against benchmark techniques; the results demonstrate that our framework outperforms the benchmark approaches in latency, system throughput, network utilization, BCR, and packet received rate.
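To make the decision loop concrete, the following is a minimal sketch, not the authors' implementation, of how a deep Q-learning agent could select between TDD and FDD modes from the network-state features named in the abstract. The network architecture, reward weighting, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a duplex-mode deep Q-learning loop (illustrative only;
# architecture, reward weights, and hyperparameters are assumptions, not
# the paper's specification).
import random
import torch
import torch.nn as nn

ACTIONS = ["TDD", "FDD"]   # duplex modes the agent can select
STATE_DIM = 5              # latency, throughput, utilization, BCR, packet received rate

class QNetwork(nn.Module):
    """Maps a network-state vector to one Q-value per duplex mode."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def reward(latency, throughput, utilization, bcr, prr):
    # Hypothetical reward shaping: favor throughput, utilization, and
    # packet delivery; penalize latency and busy channels.
    return throughput + utilization + prr - latency - bcr

q_net = QNetwork(STATE_DIM, len(ACTIONS))
target_net = QNetwork(STATE_DIM, len(ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy selection over the Q-values."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(state).argmax())

def update(state, action, r, next_state):
    """One-step temporal-difference update toward the target network."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = r + gamma * target_net(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example: one interaction step on synthetic measurements.
s = torch.rand(STATE_DIM)
a = select_action(s)
r = reward(*torch.rand(STATE_DIM).tolist())  # placeholder measurements
update(s, a, r, torch.rand(STATE_DIM))
print("selected duplex mode:", ACTIONS[a])
```

In a full system, the target network would be synchronized periodically and transitions would be sampled from an experience replay buffer; both are omitted here for brevity.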
