Abstract

Network slicing can support diverse use cases with heterogeneous requirements and has been regarded as a key enabler of future networks. However, owing to dynamic traffic demands and vehicle mobility, performing radio access network (RAN) slicing efficiently enough to provide stable quality of service (QoS) for connected vehicles remains a challenge. To meet the diversified service requests of vehicles in such a dynamic vehicular environment, in this paper we propose a two-timescale radio resource allocation scheme, named LSTM-DDPG, to provide stable service for vehicles. Specifically, to track the long-term dynamics of vehicle service requests, we use long short-term memory (LSTM) networks trained on historical data, so that dedicated resource allocation is performed on a long timescale. To cope with short-term channel variations caused by high-speed movement, a deep reinforcement learning (DRL) algorithm, namely deep deterministic policy gradient (DDPG), is leveraged to adjust the allocated resources on a short timescale. Simulation results demonstrate the effectiveness of the proposed LSTM-DDPG: the cumulative probability that a slice provides stable performance to a served vehicle within the resource scheduling interval exceeds 90%, and compared with conventional deep Q-networks (DQN), the average cumulative probability is improved by 27.8%.
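The two-timescale structure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: `predict_demand` (here a simple moving average) stands in for the LSTM demand predictor, and `fine_tune` (a channel-aware rebalancing rule) stands in for the learned DDPG policy; all names and interfaces are hypothetical assumptions.

```python
def predict_demand(history, n_slices):
    """Placeholder for the LSTM predictor: forecast per-slice demand
    from historical request data (here, a 3-step moving average)."""
    recent = history[-3:] if len(history) >= 3 else history
    return [sum(step[s] for step in recent) / len(recent)
            for s in range(n_slices)]

def fine_tune(base_alloc, channel_gains):
    """Placeholder for the DDPG actor: on the short timescale, shift
    resources toward slices whose channels are currently poor, while
    keeping the total budget fixed."""
    total = sum(base_alloc)
    weights = [a / g for a, g in zip(base_alloc, channel_gains)]
    scale = total / sum(weights)
    return [w * scale for w in weights]

def two_timescale_allocate(history, channel_gains, budget, n_slices):
    """Long timescale: split `budget` in proportion to predicted demand.
    Short timescale: adjust that split with the (placeholder) policy."""
    demand = predict_demand(history, n_slices)
    base = [budget * d / sum(demand) for d in demand]
    return fine_tune(base, channel_gains)
```

For example, with two slices, a falling channel gain on the second slice shifts part of the budget toward it while the total allocation stays equal to the budget, mirroring the paper's goal of keeping per-vehicle QoS stable within the scheduling interval.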
