Abstract

Network slicing (NS) is an emerging technology that enables network operators to partition network resources (e.g., bandwidth, power, and spectrum) into different types of slices, so that the network can adapt to the three 5G application scenarios: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). To allocate these sliced resources effectively to users with different needs, it is important to manage the resource allocation carefully. In practice, the resources of the base station (BS) are limited and each user's demand for mobile services differs. To better handle the resource allocation problem, more effective methods and algorithms have emerged in recent years, such as bidding methods, deep learning (DL), the ant colony algorithm, and the wolf pack algorithm (WPA). This paper proposes a two-tier slicing resource allocation algorithm for radio access networks based on deep reinforcement learning (DRL) and joint bidding. Wireless virtualization divides mobile operators into infrastructure providers (InPs) and mobile virtual network operators (MVNOs). This paper considers a radio access network scenario with a single base station whose aggregated bandwidth is shared by multiple users, introduces MVNOs to fully utilize base station resources, and divides the resource allocation process into two tiers. The proposed algorithm accounts for both the utilization of BS resources and the service demands of mobile users (MUs). In the upper tier, each MVNO is treated as an agent and uses a combination of bidding and a deep Q-network (DQN) to obtain more resources from the base station.
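The upper-tier bidding step can be sketched minimally as a proportional division of the base station's aggregated bandwidth among the bidding MVNOs. This is an illustrative assumption, not the paper's exact auction rule; the function and variable names (`allocate_bandwidth`, `total_bw`) are hypothetical.

```python
def allocate_bandwidth(bids, total_bw):
    """Divide the InP's aggregated bandwidth among MVNOs in
    proportion to their bids (illustrative sketch, not the
    paper's exact mechanism)."""
    total_bid = sum(bids)
    if total_bid == 0:
        # No MVNO bids anything: split the bandwidth evenly.
        return [total_bw / len(bids)] * len(bids)
    return [total_bw * b / total_bid for b in bids]

# Example: three MVNOs bid 5, 3, and 2 units for 100 MHz of bandwidth.
shares = allocate_bandwidth([5, 3, 2], 100.0)
# → [50.0, 30.0, 20.0]
```

In the proposed algorithm the bid itself would be the action chosen by each MVNO's DQN agent, so the allocation above would form part of the environment step that produces each agent's reward.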
In the lower tier, each MVNO distributes its received resources to the users connected to it, likewise using the Dueling DQN method for iterative learning to find the optimal allocation. The results show that in the upper tier, the total system utility and revenue obtained by the proposed algorithm are about 5.4% higher than with Double DQN and about 2.6% higher than with Dueling DQN. In the lower tier, the quality of service obtained with the proposed algorithm is more stable, the system utility and spectral efficiency (SE) are about 0.5–2.7% higher than with DQN and Double DQN, and convergence is faster.
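The lower tier's Dueling DQN differs from a plain DQN in how it forms Q-values: the network splits into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) − mean over actions of A. A minimal sketch of that aggregation step (the network layers themselves are omitted; `v` and `advantages` stand in for the two stream outputs, and all names are illustrative):

```python
def dueling_q_values(v, advantages):
    """Combine the value stream v = V(s) and advantage stream
    A(s, a) into Q-values using the mean-subtracted dueling
    aggregation Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [v + a - mean_adv for a in advantages]

# Example: state value 2.0, advantages for three candidate
# resource-allocation actions.
q = dueling_q_values(2.0, [1.0, 0.0, -1.0])
# → [3.0, 2.0, 1.0]; the greedy allocation action is index 0.
best_action = max(range(len(q)), key=lambda i: q[i])
```

Subtracting the mean advantage keeps the V/A decomposition identifiable, which is what lets the dueling architecture learn state values even for actions it rarely takes.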
