Network Slicing (NS) was proposed as a viable solution in Release 15 of the Third Generation Partnership Project (3GPP) to allocate limited resources among different service types and improve their Quality of Service (QoS). However, advanced vehicular applications such as autonomous driving, platooning, and remote driving have stringent QoS demands that the standard NS architecture cannot sustain. Therefore, we propose a solution, compatible with the standard 3GPP NS architecture, that implements an Actor-Critic based Deep Reinforcement Learning (DRL) algorithm in the Network Slice Subnet Management Function (NSSMF). The algorithm allocates and manages the limited resources among the slices according to their real-time traffic demands. We generate real-time traffic for each service type and train the algorithm to improve the QoS of each service type in the network. The proposed method is evaluated in terms of training performance and the Service Level Agreement (SLA) Satisfaction Ratio (SSR) of each slice. The results show that the proposed method not only improves the SSR of each slice but also performs well under increased node density in the network.
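The abstract does not specify the paper's state, action, or reward formulation. The sketch below is only a minimal illustration of a one-step Actor-Critic allocator for slice resources: the slice count, the candidate bandwidth splits, the toy traffic generator, and the served-demand reward (used as a stand-in for an SLA satisfaction signal) are all assumptions, not the authors' design.

```python
# Minimal Actor-Critic sketch for slice resource allocation (illustrative only,
# not the paper's exact algorithm): state = per-slice traffic demand, action =
# choice of a predefined bandwidth split, reward = average fraction of demand
# served (a hypothetical proxy for SLA satisfaction).
import numpy as np
import torch
import torch.nn as nn

N_SLICES = 3                        # assumed number of slices/service types
SPLITS = np.array([                 # assumed candidate bandwidth splits (sum to 1)
    [0.6, 0.3, 0.1], [0.3, 0.6, 0.1], [0.1, 0.3, 0.6],
    [0.4, 0.4, 0.2], [1/3, 1/3, 1/3],
])

class ActorCritic(nn.Module):
    def __init__(self, n_obs, n_act, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_obs, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_act)   # actor: logits over candidate splits
        self.v = nn.Linear(hidden, 1)        # critic: state-value estimate
    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h)

def traffic_demand():
    # toy per-slice demand as fractions of total capacity (assumption)
    return np.random.dirichlet([2.0, 1.0, 1.0])

model = ActorCritic(N_SLICES, len(SPLITS))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
gamma = 0.9

state = torch.tensor(traffic_demand(), dtype=torch.float32)
for step in range(2000):
    logits, value = model(state)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()

    alloc = SPLITS[action.item()]
    demand = state.numpy()
    # reward: mean served fraction across slices (proxy for SLA satisfaction)
    reward = float(np.mean(np.minimum(alloc, demand) / np.maximum(demand, 1e-6)))

    next_state = torch.tensor(traffic_demand(), dtype=torch.float32)
    with torch.no_grad():
        _, next_value = model(next_state)
    td_error = reward + gamma * next_value.squeeze() - value.squeeze()

    actor_loss = -dist.log_prob(action) * td_error.detach()
    critic_loss = td_error.pow(2)
    loss = actor_loss + critic_loss
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```

The one-step temporal-difference error drives both networks here: the critic regresses toward the bootstrapped return, while the actor is pushed toward splits whose outcome beats the critic's estimate. The paper's NSSMF integration, traffic model, and SSR definition would replace the toy pieces above.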