The advent of 5th Generation (5G) networks has introduced network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5G New Radio (5G NR) standard supports the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC), which demand dynamic adaptation of network slices to meet diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity for improving 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to the traffic requirements of the 5G use cases in two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes, namely Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate-Splitting Multiple Access (RSMA), and applies the best decoding scheme in each scenario under varying network conditions. The DRL agent was tested on maximizing the sum rate in the eMBB-URLLC scenario and on maximizing the number of successfully decoded devices in the eMBB-mMTC scenario, each under different combinations of the number of devices, power gains, and number of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and achieves an efficiency of 84% to 100% in maximizing the sum rate and the number of decoded devices across both scenarios.
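To make the scheme-selection idea concrete, the sketch below shows a minimal DRL-style loop in which an agent observes per-device power gains and picks one of the three multiple-access schemes to maximize a sum-rate reward. Everything here is an illustrative assumption: the abstract does not specify the paper's state representation, reward shaping, channel model, or network architecture, and the rate formulas are crude toy proxies rather than the paper's model. The fixed device count `K`, the exponential gain distribution, and the contextual-bandit update are all hypothetical choices for this sketch.

```python
# Minimal sketch (hypothetical): a small Q-network picks a decoding scheme
# (OMA / NOMA / RSMA) per step to maximize a toy sum-rate reward.
import numpy as np
import torch
import torch.nn as nn

SCHEMES = ["OMA", "NOMA", "RSMA"]
K = 4  # fixed number of devices in this toy setup (assumption)

def toy_sum_rate(scheme, gains, power=1.0, noise=1.0):
    """Crude per-scheme sum-rate proxies; NOT the paper's channel model."""
    g = np.sort(gains)[::-1]  # strongest device first
    if scheme == "OMA":
        # Orthogonal: each device gets 1/K of the bandwidth, no interference.
        return float(np.sum(np.log2(1 + power * g / noise)) / K)
    if scheme == "NOMA":
        # Superposition with SIC, equal power split: stronger devices cancel
        # weaker devices' signals, so device i sees i residual interferers.
        p = power / K
        return float(sum(np.log2(1 + p * gi / (noise + p * gi * i))
                         for i, gi in enumerate(g)))
    # RSMA (very rough): half the power on a common stream decoded by the
    # weakest device, half split over private streams decoded after it.
    p_c, p_p = power / 2, power / (2 * K)
    common = np.log2(1 + p_c * g[-1] / (noise + p_p * g[-1] * K))
    private = sum(np.log2(1 + p_p * gi / noise) for gi in g)
    return float(common + private)

# Tiny Q-network: gains in, one Q-value per scheme out.
qnet = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, len(SCHEMES)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

for step in range(5000):
    gains = np.random.exponential(1.0, K)  # random power gains = the state
    state = torch.tensor(np.sort(gains)[::-1].copy(), dtype=torch.float32)
    if np.random.rand() < 0.1:             # epsilon-greedy exploration
        action = np.random.randint(len(SCHEMES))
    else:
        action = int(qnet(state).argmax())
    reward = toy_sum_rate(SCHEMES[action], gains)
    # One-step (contextual-bandit) target: regress Q(s, a) toward r.
    loss = (qnet(state)[action] - reward) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the greedy action `SCHEMES[int(qnet(state).argmax())]` selects the scheme the agent expects to yield the highest sum rate for the observed gains; the paper's actual agent additionally handles the eMBB-mMTC objective (decoded-device count) and varying device and frequency counts, which this single-objective toy omits.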