Abstract
This paper discusses the need for advanced machine learning techniques to cope with the challenges brought about by cell densification, which is often seen as a possible path to the speed and QoS goals of 6G cellular networks. Machine learning (ML) models such as Deep Reinforcement Learning (DRL) have produced suitable solutions in less time than conventional algorithms. However, the training and execution phases often assume the same simulation conditions, which yields overestimated results; it is therefore necessary to mitigate the performance degradation inherent in the sim2real paradigm. The paper addresses the problem of optimal resource allocation, highlighting how decentralized DRL is a promising approach to balance solution quality against the required compute resources. A Deep Q-Network (DQN) algorithm was implemented to solve the power allocation problem in a small-cell network and compared with state-of-the-art benchmark algorithms. The paper discusses the challenges in training and testing such models and includes a brief simulation-based performance comparison of different resource allocation algorithms against three DQN models. The numerical results show how different training conditions affect DQN performance: the design of the training model influences the robustness of the DRL algorithm against unknown conditions.
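To illustrate the kind of decentralized learning loop the abstract describes, the following is a minimal, hypothetical sketch of value-based power allocation for a single small-cell agent. It is not the paper's implementation: a full DQN would use a neural network with experience replay and a target network, whereas this toy uses a tabular Q-function over a few coarse interference buckets. The power levels, noise floor, reward shaping, and state dynamics are all illustrative assumptions.

```python
# Toy value-based power allocation for one small-cell agent (illustrative
# stand-in for a DQN; a real DQN replaces the table with a neural network
# plus experience replay and a target network).
import math
import random

random.seed(0)

POWER_LEVELS = [0.1, 0.5, 1.0]   # candidate transmit powers in watts (assumed)
N_STATES = 4                     # coarse local-interference buckets (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q[s][a]: action-value estimates, one row per interference bucket
Q = [[0.0] * len(POWER_LEVELS) for _ in range(N_STATES)]

def reward(power, interference):
    """Shannon-style rate minus a power penalty (illustrative reward only)."""
    sinr = power / (interference + 0.05)      # 0.05 = assumed noise floor
    return math.log2(1.0 + sinr) - 0.5 * power

def step(state):
    """One epsilon-greedy temporal-difference update for the agent."""
    if random.random() < EPS:
        action = random.randrange(len(POWER_LEVELS))
    else:
        action = max(range(len(POWER_LEVELS)), key=lambda a: Q[state][a])
    interference = 0.1 * (state + 1)          # toy bucket -> interference map
    r = reward(POWER_LEVELS[action], interference)
    next_state = random.randrange(N_STATES)   # toy channel dynamics
    td_target = r + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    return next_state

state = 0
for _ in range(5000):
    state = step(state)

# Greedy policy after training: preferred power-level index per bucket
best = [max(range(len(POWER_LEVELS)), key=lambda a: Q[s][a])
        for s in range(N_STATES)]
print(best)
```

The sim2real concern raised in the abstract maps directly onto this sketch: the toy environment (`interference` mapping and `next_state` dynamics) is the "training condition", and a policy tuned to it may degrade when those dynamics change at deployment.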