Abstract
Network slicing (NS) management aims to provide diverse services that meet distinct requirements over the same physical communication infrastructure and to allocate resources on demand. In a dense cellular network scenario containing multiple network slices spanning multiple base stations (BSs), it remains challenging to design a real-time inter-slice resource management strategy that copes with frequent BS handovers and accommodates fluctuating service requirements. In this paper, we formulate this challenge as a multi-agent reinforcement learning (MARL) problem in which each BS acts as an agent. We then leverage a graph attention network (GAT) to strengthen temporal and spatial cooperation among agents. Furthermore, we incorporate the GAT into deep reinforcement learning (DRL) and accordingly design an intelligent real-time inter-slice resource management strategy. More specifically, we demonstrate the general effectiveness of the GAT for enhancing DRL in multi-agent systems by applying it on top of both a value-based method, deep Q-network (DQN), and a hybrid policy- and value-based method, advantage actor-critic (A2C). Finally, we verify the superiority of the GAT-based MARL algorithms through extensive simulations.
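To make the described architecture concrete, the sketch below shows one plausible way of placing a graph-attention layer ahead of a per-BS Q-head, corresponding to the GAT-on-DQN variant mentioned in the abstract. It is a minimal PyTorch sketch, not the authors' implementation: the class names, tensor shapes, single-head attention formulation, and the assumption that each BS agent observes a fixed-size slice-demand vector plus a neighbor adjacency mask are all illustrative.

```python
# Minimal sketch (not the paper's code): a single-head graph-attention layer
# feeding a per-agent Q-head. Each BS agent is assumed to observe a fixed-size
# slice-demand vector; `adj` is a 0/1 mask over neighboring BSs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring vector

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) per-BS observations; adj: (N, N) neighbor mask
        z = self.W(h)                                      # (N, out_dim)
        N = z.size(0)
        zi = z.unsqueeze(1).expand(N, N, -1)               # all (i, j) pairs
        zj = z.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))         # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)                   # attention weights per BS
        return F.elu(alpha @ z)                            # aggregated neighbor info

class GATDQNAgent(nn.Module):
    """Hypothetical per-BS agent: GAT embedding followed by a Q-head over
    discrete inter-slice resource-allocation actions."""
    def __init__(self, obs_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        self.gat = GATLayer(obs_dim, hidden_dim)
        self.q_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )

    def forward(self, obs_all: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # obs_all: (N, obs_dim) observations of all BS agents
        return self.q_head(self.gat(obs_all, adj))         # (N, n_actions) Q-values

# Toy usage: 4 BSs, 6-dim slice-demand observations, 5 allocation levels
agent = GATDQNAgent(obs_dim=6, hidden_dim=32, n_actions=5)
obs = torch.rand(4, 6)
adj = torch.ones(4, 4)                                     # fully connected example
q_values = agent(obs, adj)                                 # greedy action: q_values.argmax(-1)
```

The same GAT embedding could equally feed the actor and critic heads of an A2C agent; only the output heads and the loss would change.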